Add a job that can be run, either manually or on a schedule, to upload
a build to Coverity Scan. The job uses the run-coverity-scan script
in multiple phases (check, download tools, upload), both to avoid
wasting time (everything is skipped if you are above the upload quota)
and to avoid filling the log with the progress of downloading the
tools.
The job is intended to run on a scheduled pipeline run, and scheduled
runs will not get any other job. It requires two variables to be set in
GitLab CI, COVERITY_TOKEN and COVERITY_EMAIL. Those are already set up
in qemu-project's configuration as protected and masked variables.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add an option to check if upload is permitted without actually
attempting a build. This can be useful to add a third outcome
beyond success and failure---namely, a CI job can self-cancel
if the upload quota has been reached.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With the introduction of TSTEQ and TSTNE the .mask field is always -1,
so remove all the now-unnecessary code.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The new conditions obviously come in handy when testing individual bits
of EFLAGS, and they make it possible to remove the .mask field of
CCPrepare.
Lowering to shift+and is done by the optimizer if necessary.
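As a standalone illustration of the semantics (a plain C sketch, not
QEMU code), the new conditions simply test the intersection of a value
and a mask:

    #include <stdbool.h>
    #include <stdint.h>

    /* TCG_COND_TSTNE: "any bit selected by the mask is set";
     * TCG_COND_TSTEQ is the complement.  Where no native test
     * instruction exists, this lowers back to shift+and plus a
     * compare against zero. */
    static bool tst_ne(uint64_t value, uint64_t mask)
    {
        return (value & mask) != 0;
    }

    static bool tst_eq(uint64_t value, uint64_t mask)
    {
        return (value & mask) == 0;
    }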
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When testing the sign bit or equality to zero of a partial register, it
is useful to use a single TSTEQ or TSTNE operation. It can also be used
to test the parity flag, using bit 0 of the population count.
Do not do this for target_ulong-sized values, however; the optimizer
would produce a comparison against zero anyway, and refraining also
avoids shifts by 64, which are undefined behavior.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add new "select" and "imply" directives if needed. The resulting
config-devices.mak files are the same as before.
Builds without default devices become much smaller than before, and
qtests fail (as expected, though suboptimal) for mips64-softmmu because
most tests do not use -nodefaults, so remove it from
build-without-defaults.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
touch_all_pages() can return early, before creating threads. In this case,
however, it leaks the MemsetContext that it has allocated at the
beginning of the function.
Reported by Coverity as CID 1534922.
Fixes: 04accf43df ("oslib-posix: initialize backend memory objects in parallel", 2024-02-06)
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
deliver_bitmask is allocated on the heap in apic_deliver(), but there
are many paths in the function that return before the corresponding
g_free() is reached. Fix this by switching to g_autofree and, while at
it, also switch to g_new. Do the same in apic_deliver_irq() as well
for consistency.
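A minimal sketch of the resulting pattern (hypothetical function and
sizing, not the literal apic code):

    #include <glib.h>
    #include <stdint.h>

    static void deliver_sketch(unsigned max_cpus)
    {
        /* g_autofree releases the buffer on every return path, so the
         * early returns in apic_deliver() no longer leak it. */
        g_autofree uint32_t *deliver_bitmask = g_new(uint32_t, max_cpus);

        if (max_cpus == 0) {
            return;              /* freed automatically here */
        }
        /* ... fill deliver_bitmask and deliver the interrupt ... */
    }                            /* ... and freed here as well */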
Fixes: b5ee0468e9 ("apic: add support for x2APIC mode", 2024-02-14)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Bui Quang Minh <minhquangbui99@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
NetBSD isn't happy with QEMU's lsi53c895a emulation:
cd0(esiop0:0:2:0): command with tag id 0 reset
esiop0: autoconfiguration error: phase mismatch without command
esiop0: autoconfiguration error: unhandled scsi interrupt, sist=0x80 sstat1=0x0 DSA=0x23a64b1 DSP=0x50
This is because lsi_bad_phase() triggers a phase mismatch, which
stops SCRIPT processing. However, after returning to
lsi_command_complete(), SCRIPT is restarted with lsi_resume_script().
Fix this by adding a return value to lsi_bad_phase(), and only resume
script processing when lsi_bad_phase() didn't trigger a host interrupt.
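A rough sketch of the new call site (the boolean return value and the
argument names here are assumptions based on the description above,
not the literal patch):

    /* In lsi_command_complete(): only restart SCRIPT processing if
     * lsi_bad_phase() did not already raise a host interrupt. */
    if (!lsi_bad_phase(s, out, new_phase)) {
        lsi_resume_script(s);
    }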
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Tested-by: Helge Deller <deller@gmx.de>
Message-ID: <20240302214453.2071388-1-svens@stackframe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The --warn-common ldflag causes warnings about multiple definitions of
___asan_globals_registered when enabling AddressSanitizer with clang.
The warning is somewhat obsolete, so just remove it.
The common block is used to allow duplicate definitions of uninitialized
global variables. In the past, GCC and clang used to place such
variables in a common block by default, which prevented programmers from
noticing accidental duplicate definitions. Commit 49237acdb7 ("Enable
ld flag --warn-common") added the --warn-common ldflag so that ld warns
in such a case.
Today, neither GCC nor clang uses common blocks by default[1][2], so
any remaining use of common blocks should be intentional. Remove the
--warn-common ldflag to suppress warnings for intentional uses of
common blocks.
[1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85678
[2]: https://reviews.llvm.org/D75056
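For reference, this is the kind of duplicate definition that common
blocks silently merge (a minimal two-file sketch; the file names and
symbol are made up):

    /* a.c */
    int counter;            /* uninitialized (tentative) definition */

    /* b.c */
    int counter;            /* duplicate definition of the same symbol */

    /* With -fcommon both objects carry a common symbol and the linker
     * merges them, which ld's --warn-common reports.  With the current
     * default (-fno-common in GCC >= 10 and recent clang) the link
     * fails with a multiple-definition error instead, so the warning
     * has lost most of its value. */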
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-ID: <20240304-common-v1-1-1a2005d1f350@daynix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The original goal of adding drain_call_rcu to qmp_device_add was to
cover the failure case of qdev_device_add. However, the call to
drain_call_rcu was misplaced in 7bed89958b, which made the happy path
wait for pending RCU callbacks as well and degraded the overall
performance of qmp_device_add.
In this patch, the call to drain_call_rcu is moved under the handling
of qdev_device_add failure.
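Sketched in C, the intended control flow looks like this (simplified,
not the literal diff):

    dev = qdev_device_add(opts, errp);
    if (!dev) {
        /* Drain the RCU queue only on the failure path; the common
         * successful path no longer waits for pending RCU callbacks. */
        drain_call_rcu();
        qemu_opts_del(opts);
        return;
    }
    object_unref(OBJECT(dev));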
Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
Message-ID: <20231103105602.90475-1-ds-gavr@yandex-team.ru>
Fixes: 7bed89958b ("device_core: use drain_call_rcu in in qmp_device_add", 2020-10-12)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
HP-UX 10.20 seems to make the lsi53c895a spin on a memory location
under certain circumstances. As the SCSI controller and CPU are not
running at the same time, this loop will never finish. After some
time, the check loop terminates with an unexpected device disconnect.
This works, but is slow because the kernel resets the SCSI controller.
Instead of signaling UDC, start a timer and exit the loop. Until the
timer fires, the CPU can process instructions which might change the
memory location.
The limit of instructions is also reduced because scripts running on
the SCSI processor are usually very short. This keeps the time until
the loop exits short.
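A hedged sketch of the idea (the timer field name, the delay and the
instruction limit shown here are assumptions, not the actual patch):

    /* In lsi_execute_script(): instead of raising UDC when too many
     * SCRIPTS instructions ran without progress, arm a timer and stop;
     * processing resumes when the timer fires, after the CPU had a
     * chance to update the polled memory location. */
    if (insn_processed > LSI_MAX_INSN) {
        timer_mod(s->scripts_timer,
                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 10 * SCALE_US);
        return;                /* resume SCRIPTS when the timer fires */
    }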
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-ID: <20240229204407.1699260-1-svens@stackframe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The macOS jobs in our CI recently started failing, complaining that
the distutils module is not available anymore. And indeed, according to
https://peps.python.org/pep-0632/ it has been deprecated for a while
and has now been removed in recent Python versions.
Fortunately, we only use it for a version check via LooseVersion here,
which we don't really need anymore - according to Repology.org, these
are the versions of sphinx-rtd-theme that are currently used by the
various distros:
centos_stream_8: 0.3.1
centos_stream_9: 0.5.1
fedora_38: 1.1.1
fedora_39: 1.2.2
freebsd: 1.0.0
haikuports_master: 1.2.1
openbsd: 1.2.2
opensuse_leap_15_5: 0.5.1
pkgsrc_current: 2.0.0
debian_11: 0.5.1
debian_12: 1.2.0
ubuntu_20_04: 0.4.3
ubuntu_22_04: 1.0.0
ubuntu_24_04: 2.0.0
So except for CentOS 8, all distros are using a newer version of
sphinx-rtd-theme, and for CentOS 8 we don't support compiling with
the Sphinx of the distro anymore anyway, since it's based on the
Python 3.6 interpreter there. For compiling on CentOS 8, you have
to use the alternative Python 3.8 interpreter, which comes without
Sphinx, so Sphinx has to be installed via pip in the venv instead -
and that one is a newer version, too, according to our
pythondeps.toml file.
Thus we can simply drop the version check now to get rid of the
distutils dependency here.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Message-id: 20240304130403.129543-1-thuth@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the missing 64-bit hppa firmware blob so that it gets installed.
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 7c0dfcf939 ("target/hppa: Update SeaBIOS-hppa to version 16")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
When calculating the IOR for the exception handlers, the current
unwind_breg value is needed on 64-bit hppa machines.
Restore that value by calling cpu_restore_state() earlier, which in turn
calls hppa_restore_state_to_opc() which restores the unwind_breg for the
current instruction.
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 3824e0d643 ("target/hppa: Export function hppa_set_ior_and_isr()")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Unaligned 64-bit accesses were found in Linux to clobber carry bits,
resulting in bad results if an arithmetic operation involving a
carry bit was executed after an unaligned 64-bit operation.
hppa 2.0 defines additional carry bits in PSW register bits 32..39.
When restoring the PSW after executing an unaligned instruction trap,
those bits were not cleared and ended up being active all the time.
Since no bits other than the upper carry bits are needed in the upper
32 bits of env->psw, and since those are stored in env->psw_cb, just
clear the entire upper 32 bits when storing the psw to solve the
problem unconditionally.
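In cpu_hppa_put_psw() this amounts to masking the incoming value before
it is stored (an illustrative sketch, not the literal diff):

    /* The hppa 2.0 carry bits (PSW bits 32..39) are already tracked in
     * env->psw_cb, so keep only the low 32 bits when storing the PSW. */
    psw &= 0xffffffffull;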
Fixes: 931adff314 ("target/hppa: Update cpu_hppa_get/put_psw for hppa64")
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Charlie Jenkins <charlie@rivosinc.com>
Cc: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Add a regression test for a recently fixed issue, where shmat()
desynced the guest and the host view of the address space and caused
open("/proc/self/maps") to SEGV.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <jwyuvao4apydvykmsnvacwshdgy3ixv7qvkh4dbxm3jkwgnttw@k4wpaayou7oq>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
pull-loongarch-20240229
V2: fix build error on mipsel
* tag 'pull-loongarch-20240229' of https://gitlab.com/gaosong/qemu:
loongarch: Change the UEFI loading mode to loongarch
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit ffda5db65a ("io/channel-tls: fix handling of bigger read buffers")
changed the behavior of the TLS io channels to schedule a second reading
attempt if there is still incoming data pending. This caused a regression
with backends like the sclpconsole that check in their read function that
the sender does not try to write more bytes to it than the device can
currently handle.
The problem can be reproduced like this:
1) In one terminal, do this:
mkdir qemu-pki
cd qemu-pki
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 365000 -key ca-key.pem -out ca-cert.pem
# enter some dummy value for the cert
openssl genrsa 2048 > server-key.pem
openssl req -new -x509 -nodes -days 365000 -key server-key.pem \
-out server-cert.pem
# enter some other dummy values for the cert
gnutls-serv --echo --x509cafile ca-cert.pem --x509keyfile server-key.pem \
--x509certfile server-cert.pem -p 8338
2) In another terminal, do this:
wget https://download.fedoraproject.org/pub/fedora-secondary/releases/39/Cloud/s390x/images/Fedora-Cloud-Base-39-1.5.s390x.qcow2
qemu-system-s390x -nographic -nodefaults \
-hda Fedora-Cloud-Base-39-1.5.s390x.qcow2 \
-object tls-creds-x509,id=tls0,endpoint=client,verify-peer=false,dir=$PWD/qemu-pki \
-chardev socket,id=tls_chardev,host=localhost,port=8338,tls-creds=tls0 \
-device sclpconsole,chardev=tls_chardev,id=tls_serial
QEMU then aborts after a second or two with:
qemu-system-s390x: ../hw/char/sclpconsole.c:73: chr_read: Assertion
`size <= SIZE_BUFFER_VT220 - scon->iov_data_len' failed.
Aborted (core dumped)
It looks like the second read is performed without calling the
chr_can_read() function first, which should normally always be done
before sending bytes to a character device to see how much it can
handle, so s->max_size in tcp_chr_read() still contains the old value
from the previous read. Let's make sure that we use an up-to-date value
by calling tcp_chr_read_poll() again here.
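In tcp_chr_read() that boils down to something like this (a sketch
based on the description above rather than the exact diff):

    /* Refresh the limit instead of trusting the value computed before
     * the previous read, so the backend's chr_can_read() result is
     * honoured for this batch of decrypted TLS data as well. */
    s->max_size = tcp_chr_read_poll(opaque);
    if (s->max_size == 0) {
        return;
    }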
Fixes: ffda5db65a ("io/channel-tls: fix handling of bigger read buffers")
Buglink: https://issues.redhat.com/browse/RHEL-24614
Reviewed-by: "Daniel P. Berrangé" <berrange@redhat.com>
Message-ID: <20240229104339.42574-1-thuth@redhat.com>
Reviewed-by: Antoine Damhet <antoine.damhet@blade-group.com>
Tested-by: Antoine Damhet <antoine.damhet@blade-group.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Since Windows text files use CRLF for each \n, the Windows version of
QEMU inserts a CR into the PCAP stream whenever an LF is encountered
while writing USB PCAP files. This is because the PCAP file is opened
as TEXT instead of BINARY.
To show an example: the BBB protocol, a very common protocol for USB
disks, uses a 10-byte command packet. For example, the READ_CAPACITY(10)
command has a command block length of 10 (0xA). When this 10-byte
command (part of the 31-byte CBW) is placed into the PCAP file, the
Windows text-mode file I/O inserts a 0x0D before the 0x0A, turning the
31-byte CBW into a 32-byte CBW.
Actual CBW:
0040 55 53 42 43 01 00 00 00 08 00 00 00 80 00 0a 25 USBC...........%
0050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...............
PCAP CBW:
0040 55 53 42 43 01 00 00 00 08 00 00 00 80 00 0d 0a USBC............
0050 25 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 %..............
I believe simply opening the PCAP file as BINARY instead of TEXT will fix
this issue.
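A self-contained illustration of the difference (plain C, hypothetical
file name):

    #include <stdio.h>

    int main(void)
    {
        /* "wb" keeps the byte stream untouched on Windows; with "w"
         * (text mode) every 0x0A would be expanded to 0x0D 0x0A,
         * corrupting binary pcap records such as a 31-byte CBW. */
        unsigned char cbw[31] = { 'U', 'S', 'B', 'C', 0x01, 0, 0, 0, 0x08 };
        FILE *f = fopen("usb.pcap", "wb");

        if (f) {
            fwrite(cbw, 1, sizeof(cbw), f);   /* 31 bytes written as-is */
            fclose(f);
        }
        return 0;
    }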
Resolves: https://bugs.launchpad.net/qemu/+bug/2054889
Signed-off-by: Benjamin David Lunt <benlunt@fysnet.net>
Message-ID: <000101da6823$ce1bbf80$6a533e80$@fysnet.net>
[thuth: Break long line to avoid checkpatch.pl error]
Signed-off-by: Thomas Huth <thuth@redhat.com>
When using "--without-default-devices", the ARM_GICV3_TCG and ARM_GIC_KVM
settings currently get disabled, though the arm virt machine is only of
very limited use in that case. This also causes the migration-test to
fail in such builds. Let's make sure that we always keep the GIC switches
enabled in the --without-default-devices builds, too.
Message-ID: <20240221110059.152665-1-thuth@redhat.com>
Tested-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Thomas Huth <thuth@redhat.com>
In qvring_init() we're writing vq->used->avail_event at "vq->used + 2 +
array_size". The struct pointed to by vq->used is (from the Linux
virtio_ring.h header):
* // A ring of used descriptor heads with free-running index.
* __virtio16 used_flags;
* __virtio16 used_idx;
* struct vring_used_elem used[num];
* __virtio16 avail_event_idx;
So 'flags' is the word right at vq->used and 'idx' is at vq->used + 2.
We need to skip 'idx' by adding 2 more bytes and then add the size of
the used[] array to reach avail_event_idx. An example of how to properly
access this field can be found in qvirtqueue_kick():
avail_event = qvirtio_readw(d, qts, vq->used + 4 +
sizeof(struct vring_used_elem) * vq->size);
This error was detected when enabling the RISC-V 'virt' libqos machine.
The 'idx' test from vhost-user-blk-test.c errors out with a timeout in
qvirtio_wait_used_elem(). The timeout happens because when processing
the first element, 'avail_event' is read in qvirtqueue_kick() as non-zero
because we didn't initialize it properly (and the memory at that point
happened to be non-zero). 'idx' is 0.
All of this makes this condition fail because "idx - avail_event" will
overflow and be non-zero:
/* < 1 because we add elements to avail queue one by one */
if ((flags & VRING_USED_F_NO_NOTIFY) == 0 &&
(!vq->event || (uint16_t)(idx-avail_event) < 1)) {
d->bus->virtqueue_kick(d, vq);
}
As a result the virtqueue is never kicked and we'll timeout waiting for it.
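The fix, then, is to write the field at the same offset used by
qvirtqueue_kick() (a sketch; it assumes a qvirtio_writew() helper
symmetrical to the qvirtio_readw() call quoted above):

    /* used_flags (2 bytes) + used_idx (2 bytes) + used[] array, and
     * only then avail_event: */
    qvirtio_writew(d, qts, vq->used + 4 +
                   sizeof(struct vring_used_elem) * vq->size, 0);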
Fixes: 1053587c3f ("libqos: Added EVENT_IDX support")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240217192607.32565-3-dbarboza@ventanamicro.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The loop isn't setting the values for the last element. Every other
element is being initialized with addr = 0, flags = VRING_DESC_F_NEXT
and next = i + 1. The last element is never touched.
This became a problem when enabling a RISC-V 'virt' libqos machine in
the 'indirect' test of virtio-blk-test.c. The 'flags' of the last
element ends up being an odd number (since we didn't touch it), and an
odd value is mistaken for VRING_DESC_F_NEXT being set, since that flag
happens to be 1.
Deep inside hw/virtio/virtio.c, in virtqueue_split_pop() and then
virtqueue_split_read_next_desc(), a check for VRING_DESC_F_NEXT is made
to see if we're supposed to chain. The code keeps on chaining past the
last element because the uninitialized value happens to be odd.
We'll error out right after that because desc->next (which is also
uninitialized) will be >= max. A VIRTQUEUE_READ_DESC_ERROR will be
returned, with an error message like this in the stderr:
qemu-system-riscv64: Desc next is 49391
Since we never returned, we'll end up timing out at qvirtio_wait_used_elem():
ERROR:../tests/qtest/libqos/virtio.c:236:qvirtio_wait_used_elem:
assertion failed: (g_get_monotonic_time() - start_time <= timeout_us)
The root cause is using uninitialized values from guest_alloc() in
qvring_indirect_desc_setup(). There's no guarantee that the memory pages
retrieved will be zeroed, so we can't make assumptions. In fact, commit
5b4f72f5e8 ("tests/qtest: properly initialise the vring used idx") fixed a
similar problem stating "It is probably not wise to assume guest memory
is zeroed anyway". I concur.
Initialize all elems in qvring_indirect_desc_setup().
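A sketch of the resulting initialization (the descriptor offsets follow
the standard 16-byte vring_desc layout; the writeq/writew helpers and
the 'desc'/'elem' names are assumed to match what libqos already uses):

    /* vring_desc is { u64 addr; u32 len; u16 flags; u16 next; }, i.e.
     * flags at offset 12 and next at offset 14 of each 16-byte slot. */
    for (i = 0; i < elem - 1; ++i) {
        qvirtio_writeq(d, qts, desc + (16 * i), 0);              /* addr */
        qvirtio_writew(d, qts, desc + (16 * i) + 12, VRING_DESC_F_NEXT);
        qvirtio_writew(d, qts, desc + (16 * i) + 14, i + 1);     /* next */
    }
    /* Last element: no NEXT flag, no next index. */
    qvirtio_writeq(d, qts, desc + (16 * i), 0);
    qvirtio_writew(d, qts, desc + (16 * i) + 12, 0);
    qvirtio_writew(d, qts, desc + (16 * i) + 14, 0);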
Fixes: f294b029aa ("libqos: Added indirect descriptor support to virtio implementation")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240217192607.32565-2-dbarboza@ventanamicro.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The kernel ABI was changed with
commit d23b77953f5a4fbf94c05157b186aac2a247ae32
Author: Huacai Chen <chenhuacai@kernel.org>
Date: Wed Jan 17 12:43:08 2024 +0800
LoongArch: Change SHMLBA from SZ_64K to PAGE_SIZE
during the v6.8 cycle.
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is the only case in which we expect to have no host memory backing
for a guest memory page, because in general Linux user processes cannot
map any pages in the top half of the 64-bit address space.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2170
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The variables uext_opc and sext_opc are used without initialization if
TCG_TARGET_extract_i{32,64}_valid returns false. The result, depending
on the compiler, might be the generation of extract and sextract opcodes
with invalid offset and count, or just random data in the TCG opcode
stream.
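One straightforward way to get a well-defined value (a sketch, not
necessarily the applied fix; the surrounding variable names are
assumptions):

    /* Default to "no suitable extract opcode available". */
    TCGOpcode uext_opc = 0, sext_opc = 0;

    if (TCG_TARGET_extract_i64_valid(sh, 1)) {
        uext_opc = INDEX_op_extract_i64;
        sext_opc = INDEX_op_sextract_i64;
    }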
Fixes: ceb9ee06b7 ("tcg/optimize: Handle TCG_COND_TST{EQ,NE}", 2024-02-03)
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20240228110641.287205-1-pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The assertion was never correct, because the alignment is a composite
of the image alignment and SHMLBA. Even if the image alignment didn't
match the image address, an assertion would not be correct -- more
appropriate would be an error message about an ill-formed image. But
the image cannot be held to SHMLBA under any circumstances.
Fixes: ee94743034 ("linux-user: completely re-write init_guest_space")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2157
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: Alexey Sheplyakov <asheplyakov@yandex.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
This option controls the host page size. Judging from the misuse in
our own testsuite, it is easily confused with the guest page size.
The only thing that occurs when changing the host page size is
that stuff breaks, because one cannot actually change the host
page size. Therefore reject all but the no-op setting as part
of the deprecation process.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-27-richard.henderson@linaro.org>