Move declarations out of qemu-common.h for functions declared in
utils/ files: e.g. include/qemu/path.h for utils/path.c.
Move inline functions out of qemu-common.h and into new files (e.g.
include/qemu/bcd.h)
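For illustration, a minimal sketch of what one of the new split-out headers
might look like (the BCD helpers shown are an assumption about its contents):

    /* include/qemu/bcd.h -- illustrative sketch */
    #ifndef QEMU_BCD_H
    #define QEMU_BCD_H

    #include <stdint.h>

    /* Convert a binary value in 0..99 to packed BCD. */
    static inline uint8_t to_bcd(uint8_t val)
    {
        return ((val / 10) << 4) | (val % 10);
    }

    /* Convert a packed BCD byte back to binary. */
    static inline uint8_t from_bcd(uint8_t bcd)
    {
        return ((bcd >> 4) * 10) + (bcd & 0x0f);
    }

    #endif /* QEMU_BCD_H */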
Signed-off-by: Veronia Bahaa <veroniabahaa@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the
Error typedef. Since then, we've moved to include qemu/osdep.h
everywhere. Its file comment explains: "To avoid getting into
possible circular include dependencies, this file should not include
any other QEMU headers, with the exceptions of config-host.h,
compiler.h, os-posix.h and os-win32.h, all of which are doing a
similar job to this file and are under similar constraints."
qapi/error.h doesn't do a similar job, and it doesn't adhere to
similar constraints: it includes qapi-types.h. That's in excess of
100KiB of crap most .c files don't actually need.
Add the typedef to qemu/typedefs.h, and include that instead of
qapi/error.h. Include qapi/error.h in .c files that need it and don't
get it now. Include qapi-types.h in qom/object.h for uint16List.
Update scripts/clean-includes accordingly. Update it further to match
reality: replace config.h by config-target.h, add sysemu/os-posix.h,
sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h
comment quoted above similarly.
This reduces the number of objects depending on qapi/error.h from "all
of them" to less than a third. Unfortunately, the number depending on
qapi-types.h shrinks only a little. More work is needed for that one.
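Sketch of the two sides of the change (excerpts only):

    /* include/qemu/typedefs.h: a bare forward declaration is enough for
     * headers that only pass 'Error *' around. */
    typedef struct Error Error;

    /* A .c file that actually constructs or inspects errors now includes
     * the full header itself: */
    #include "qemu/osdep.h"
    #include "qapi/error.h"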
Signed-off-by: Markus Armbruster <armbru@redhat.com>
[Fix compilation without the spice devel packages. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
migration:
- postcopy is no longer experimental
- fix a use-after-free in postcopy
- fix a compile warning
# gpg: Signature made Fri 11 Mar 2016 12:29:33 GMT using RSA key ID 854083B6
# gpg: Good signature from "Amit Shah <amit@amitshah.net>"
# gpg: aka "Amit Shah <amit@kernel.org>"
# gpg: aka "Amit Shah <amitshah@gmx.net>"
* remotes/amit-migration/tags/migration-for-2.6-7:
postcopy: Remove the x-
postcopy: listen thread is never joined
migration: fix use-after-free in loadvm_postcopy_handle_run_bh
migration: fix warning for source_return_path_thread
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that QEMU wraps the Win32 sockets methods to automatically
set errno upon failure, there is no reason for callers to use
the socket_error() method. They can rely on accessing errno
even on Win32. Remove all use of socket_error() from general
code, leaving it as a static method in oslib-win32.c only.
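A minimal sketch of the caller-side pattern this enables (hypothetical
helper; with the Win32 wrappers in place, errno is meaningful on both
platforms):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* No socket_error() needed: the wrappers set errno on failure on
     * Win32 just as on POSIX, so portable code can simply read errno. */
    static int try_connect(int fd, const struct sockaddr *addr, socklen_t len)
    {
        if (connect(fd, addr, len) < 0) {
            fprintf(stderr, "connect failed: %s\n", strerror(errno));
            return -1;
        }
        return 0;
    }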
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Migration of pseries-2.3 doesn't have a configuration section. Unfortunately,
QEMU 2.4/2.4.1/2.5 are buggy and always stream and expect the configuration
section, which breaks migration in both directions.
This patch introduces a property that allows enforcing a configuration
section for machine types that don't have one.
It can be set at startup:
-machine enforce-config-section=on
or later from the QEMU monitor:
qom-set /machine enforce-config-section on
It is up to the tooling to set or unset this property according to the
version of QEMU at the other end of the pipe.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
There is a possibility to hit an assert in qcow2_get_specific_info because
s->qcow_version is undefined. This happens when the VM is starting from a
suspended state, i.e. it is processing an incoming migration, and at the
same time 'info block' is called.
The problem is that qcow2_invalidate_cache() closes the image and
memset()s BDRVQcowState in the middle.
The patch moves processing of bdrv_invalidate_cache_all out of
coroutine context for postcopy migration to avoid that. This function
is called with the following stack:
process_incoming_migration_co
qemu_loadvm_state
qemu_loadvm_state_main
loadvm_process_command
loadvm_postcopy_handle_run
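A rough sketch of the approach (simplified; the helper and struct names are
taken from the call stack above, but the details are assumed):

    /* Instead of calling bdrv_invalidate_cache_all() directly from the
     * coroutine, schedule a bottom half that runs it from the main loop. */
    static void loadvm_postcopy_handle_run_bh(void *opaque)
    {
        Error *local_err = NULL;

        bdrv_invalidate_cache_all(&local_err);   /* safe: not in coroutine */
        if (local_err) {
            error_report_err(local_err);
        }
    }

    static int loadvm_postcopy_handle_run(MigrationIncomingState *mis)
    {
        QEMUBH *bh = qemu_bh_new(loadvm_postcopy_handle_run_bh, mis);

        qemu_bh_schedule(bh);
        return 0;
    }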
Signed-off-by: Denis V. Lunev <den@openvz.org>
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
Message-Id: <1456304019-10507-3-git-send-email-den@openvz.org>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
There is a possibility to hit an assert in qcow2_get_specific_info because
s->qcow_version is undefined. This happens when the VM is starting from a
suspended state, i.e. it is processing an incoming migration, and at the
same time 'info block' is called.
The problem is that qcow2_invalidate_cache() closes the image and
memset()s BDRVQcowState in the middle.
The patch moves processing of bdrv_invalidate_cache_all out of
coroutine context for standard migration to avoid that.
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
Message-Id: <1456304019-10507-2-git-send-email-den@openvz.org>
[Amit: Fix a use-after-free bug]
Signed-off-by: Amit Shah <amit.shah@redhat.com>
In qemu_savevm_state_complete_precopy(), we iterate over each device to add
a JSON object and transfer the related state to the destination; the order
of the last two steps could be refined.
Current order:
json_start_object()
save_section_header()
vmstate_save()
json_end_object()
save_section_footer()
After the change:
json_start_object()
save_section_header()
vmstate_save()
save_section_footer()
json_end_object()
This patch reorders the code to make it symmetric. No functional change.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-Id: <1454626230-16334-1-git-send-email-richard.weiyang@gmail.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
vhost, virtio, pci, pc
Fixes all over the place.
virtio dataplane migration support.
Old q35 machine types removed.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 25 Feb 2016 11:16:46 GMT using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
* remotes/mst/tags/for_upstream: (21 commits)
q35: No need to check gigabyte_align
q35: Remove unused q35-acpi-dsdt.aml file
ich9: Remove enable_tco arguments from init functions
machine: Remove no_tco field
q35: Remove old machine versions
tests/vhost-user-bridge: fix build on 32 bit systems
vring: remove
virtio-scsi: do not use vring in dataplane
virtio-blk: do not use vring in dataplane
virtio-blk: fix "disabled data plane" mode
virtio: export vring_notify as virtio_should_notify
virtio: add AioContext-specific function for host notifiers
vring: make vring_enable_notification return void
block-migration: acquire AioContext as necessary
pci core: function pci_bus_init() cleanup
pci core: function pci_host_bus_register() cleanup
balloon: Use only 'pc-dimm' type dimm for ballooning
virtio-balloon: rewrite get_current_ram_size()
move get_current_ram_size to virtio-balloon.c
vhost-user: don't merge regions with different fds
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is needed because dataplane will run during block migration as well.
The block device migration code is quite liberal in taking the iothread
mutex. For simplicity, keep it the same way, even though one could
actually choose between the BQL (for regular BlockDriverStates) and
the AioContext (for dataplane BlockDriverStates). When the block layer
is made fully thread safe, aio_context_acquire shall go away altogether.
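A minimal sketch of the locking pattern (the surrounding helper name is
hypothetical):

    /* Take the AioContext that owns the BlockDriverState before touching
     * it from the migration thread, and drop it afterwards. */
    static void blk_mig_do_io(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);

        aio_context_acquire(ctx);
        /* ... issue reads/writes and poke dirty bitmaps here ... */
        aio_context_release(ctx);
    }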
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Spice hooks the migration status changes to figure out when to
transmit information to the new spice server; but the migration
status in postcopy doesn't quite fit - the destination starts
running before the end of the source migration.
It's not a case of hanging off the migration status change to
postcopy-active either, since that happens before we stop the
guest CPU.
Fix it by sending a notify just after sending the device state,
and adding a flag that can be tested by the notify receiver.
Symptom:
spice handover doesn't work with the error:
red_worker.c:11540:display_channel_wait_for_migrate_data: timeout
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Message-id: 1456161452-25318-1-git-send-email-dgilbert@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
When using 'migrate -b', we must make sure to take ownership of the
image before writing to it. Otherwise metadata would be thrown away on
migration completion; this was caught by the assertions introduced in
commit 09e0c771.
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Within the if statement, time_spent is assured to be non-zero.
This patch just removes the check on time_spent when calculating mbps.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Although accesses to ram_list.dirty_memory[] use atomics so multiple
threads can safely dirty the bitmap, the data structure is not fully
thread-safe yet.
This patch handles the RAM hotplug case where ram_list.dirty_memory[] is
grown. ram_list.dirty_memory[] is changed from a regular bitmap to an
RCU array of pointers to fixed-size bitmap blocks. Threads can continue
accessing bitmap blocks while the array is being extended. See the
comments in the code for an in-depth explanation of struct
DirtyMemoryBlocks.
I have tested that live migration with virtio-blk dataplane works.
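Sketch of the structure described above (simplified; the real definition and
block size live in the code comments the commit refers to):

    #define DIRTY_MEMORY_BLOCK_SIZE (256 * 1024 * 8)   /* bits per block */

    typedef struct DirtyMemoryBlocks {
        struct rcu_head rcu;           /* reclaimed via call_rcu()  */
        unsigned long *blocks[];       /* fixed-size bitmap blocks  */
    } DirtyMemoryBlocks;

    /* Readers walk blocks[] under rcu_read_lock(); growing the array
     * publishes a new DirtyMemoryBlocks (copying the old block pointers)
     * with atomic_rcu_set(), so concurrent readers are never blocked. */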
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1453728801-5398-2-git-send-email-stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace the uint8 softfloat-specific typedef with uint8_t.
This change was made with
find include hw fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\buint8\b/uint8_t/g'
together with manual removal of the typedef definition and
manual fixing of more erroneous uses found via test compilation.
It turns out that the only code using this type is an accidental
use where uint8_t was intended anyway...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Leon Alrae <leon.alrae@imgtec.com>
Acked-by: James Hogan <james.hogan@imgtec.com>
Message-id: 1452603315-27030-7-git-send-email-peter.maydell@linaro.org
So far, live migration with shared storage meant that the image is in a
not-really-ready don't-touch-me state on the destination while the
source is still actively using it, but after completing the migration,
the image was fully opened on both sides. This is bad.
This patch adds a block driver callback to inactivate images on the
source before completing the migration. Inactivation means that the image
goes to a state as if it had just been live-migrated to the qemu instance on
the source (i.e. BDRV_O_INACTIVE is set). You're then supposed to continue
either on the source or on the destination, which takes ownership of the
image.
A typical migration looks like this now with respect to disk images:
1. Destination qemu is started, the image is opened with
BDRV_O_INACTIVE. The image is fully opened on the source.
2. Migration is about to complete. The source flushes the image and
inactivates it. Now both sides have the image opened with
BDRV_O_INACTIVE and are expecting the other side to still modify it.
3. One side (the destination on success) continues and calls
bdrv_invalidate_all() in order to take ownership of the image again.
This removes BDRV_O_INACTIVE on the resuming side; the flag remains
set on the other side.
This ensures that the same image isn't written to by both instances
(unless both are resumed, but then you get what you deserve). This is
important because .bdrv_close for non-BDRV_O_INACTIVE images could write
to the image file, which is definitely forbidden while another host is
using the image.
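A rough sketch of the source-side step 2, assuming a per-BDS helper (the
actual callback plumbing is in the patch itself):

    /* Flush the image and mark it inactive before migration completes. */
    static int blk_inactivate_before_handover(BlockDriverState *bs)
    {
        int ret = bdrv_flush(bs);
        if (ret < 0) {
            return ret;
        }
        /* From now on, the resuming side is expected to take ownership by
         * invalidating its cache, which clears this flag again. */
        bs->open_flags |= BDRV_O_INACTIVE;
        return 0;
    }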
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
This allows sending a partial array whose size is given by another
structure field multiplied by a constant.
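Illustrative use of the new capability (struct, field, and multiplier are
hypothetical; the macro name follows the vmstate naming convention assumed
here):

    typedef struct DemoState {
        uint32_t num_regs;
        uint32_t regs[64];
    } DemoState;

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(num_regs, DemoState),
            /* Send only num_regs * 4 elements of regs[]. */
            VMSTATE_VARRAY_MULTIPLY(regs, DemoState, num_regs, 4,
                                    vmstate_info_uint32, uint32_t),
            VMSTATE_END_OF_LIST()
        }
    };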
Signed-off-by: Juan Quintela <quintela@redhat.com>
[PMM: updated to current master]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Add vmstate support for migrating arrays of CPU_DoubleU via
VMSTATE_CPUDOUBLE_ARRAY.
Signed-off-by: Juan Quintela <quintela@redhat.com>
[PMM: rebased, since files have all moved since 2012;
added VMSTATE_CPUDOUBLE_ARRAY_V for consistency with FLOAT64]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Error reporting patches for 2016-01-13
# gpg: Signature made Wed 13 Jan 2016 14:21:48 GMT using RSA key ID EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
* remotes/armbru/tags/pull-error-2016-01-13: (41 commits)
checkpatch: Detect newlines in error_report and other error functions
error: Consistently name Error * objects err, and not errp
s390/sclp: Simplify control flow in sclp_realize()
hw/s390x: Rename local variables Error *l_err to just err
error: Clean up errors with embedded newlines (again)
vhdx: Fix "log that needs to be replayed" error message
pci-assign: Clean up "Failed to assign" error messages
vmdk: Clean up "Invalid extent lines" error message
vmdk: Clean up control flow in vmdk_parse_extents() a bit
error: Strip trailing '\n' from error string arguments (again)
qemu-io qemu-nbd: Use error_report() etc. instead of fprintf()
migration: Use error_reportf_err() instead of monitor_printf()
spapr: Use error_reportf_err()
error: Use error_prepend() where it makes obvious sense
error: Use error_reportf_err() where it makes obvious sense
error: Don't decorate original error message when adding to it
error: New error_prepend(), error_reportf_err()
test-throttle: Simplify qemu_init_main_loop() error handling
qemu-nbd: Clean up "Failed to load snapshot" error message
block: Clean up "Could not create temporary overlay" error message
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Both error_reportf_err() and monitor_printf() print to the same
destination when monitor_printf() is used correctly, i.e. within an
HMP monitor. Elsewhere, monitor_printf() does nothing, while
error_reportf_err() reports to stderr.
Both changed functions are HMP command handlers. These should only
run within an HMP monitor.
Unlike monitor_printf(), error_reportf_err() uses the error whole
instead of just its message obtained with error_get_pretty(). This
avoids suppressing its hint (see commit 50b7b00), but I don't think
the errors touched in this commit can come with hints.
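A typical conversion looks like this (handler and prefix are illustrative):

    -    monitor_printf(mon, "migration: %s\n", error_get_pretty(err));
    -    error_free(err);
    +    error_reportf_err(err, "migration: ");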
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1450452927-8346-15-git-send-email-armbru@redhat.com>
Both error_report_err() and monitor_printf() print to the same
destination when monitor_printf() is used correctly, i.e. within an
HMP monitor. Elsewhere, monitor_printf() does nothing, while
error_report_err() reports to stderr.
Most changed functions are HMP command handlers. These should only
run within an HMP monitor. The one exception is bdrv_password_cb(),
which should also only run within an HMP monitor.
Four command handlers prefix the error message with the command name:
balloon, migrate_set_capability, migrate_set_parameter, migrate.
Pointless, drop.
Unlike monitor_printf(), error_report_err() uses the error whole
instead of just its message obtained with error_get_pretty(). This
avoids suppressing its hint (see commit 50b7b00). Example:
(qemu) device_add ivshmem,id=666
Parameter 'id' expects an identifier
Identifiers consist of letters, digits, '-', '.', '_', starting with a letter.
Try "help device_add" for more information
The "Identifiers consist of..." line is new with this patch.
Coccinelle semantic patch:
@@
expression M, E;
@@
- monitor_printf(M, "%s\n", error_get_pretty(E));
- error_free(E);
+ error_report_err(E);
@r1@
expression M, E;
format F;
position p;
@@
- monitor_printf(M, "...%@F@\n", error_get_pretty(E));@p
- error_free(E);
+ error_report_err(E);
@script:python@
p << r1.p;
@@
print "%s:%s:%s: prefix dropped" % (p[0].file, p[0].line, p[0].column)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1450452927-8346-4-git-send-email-armbru@redhat.com>
Now that we guarantee the user doesn't have any enum values
beginning with a single underscore, we can use that for our
own purposes. Renaming ENUM_MAX to ENUM__MAX makes it obvious
that the sentinel is generated.
This patch was mostly generated by applying a temporary patch:
|diff --git a/scripts/qapi.py b/scripts/qapi.py
|index e6d014b..b862ec9 100644
|--- a/scripts/qapi.py
|+++ b/scripts/qapi.py
|@@ -1570,6 +1570,7 @@ const char *const %(c_name)s_lookup[] = {
| max_index = c_enum_const(name, 'MAX', prefix)
| ret += mcgen('''
| [%(max_index)s] = NULL,
|+// %(max_index)s
| };
| ''',
| max_index=max_index)
then running:
$ cat qapi-{types,event}.c tests/test-qapi-types.c |
sed -n 's,^// \(.*\)MAX,s|\1MAX|\1_MAX|g,p' > list
$ git grep -l _MAX | xargs sed -i -f list
The only things not generated are the changes in scripts/qapi.py.
Rejecting enum members named 'MAX' is now useless, and will be dropped
in the next patch.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1447836791-369-23-git-send-email-eblake@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
[Rebased to current master, commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
My fix (84e7b80a) replaced the last_sent_block update that I'd
removed earlier; however it was too aggressive in the xbzrle case.
save_xbzrle_page might return '0' to mean that the page didn't
need sending since it was the same as the last sent version;
in this case we can't update 'last_sent_block' since we didn't
actually send it.
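Sketch of the guarded update (simplified; argument list abridged and not
necessarily the exact shape of the fix):

    pages = save_xbzrle_page(f, &p, current_addr, block, offset,
                             last_stage, bytes_transferred);
    if (pages > 0) {
        /* Only remember the block if the page was actually sent. */
        last_sent_block = block;
    }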
Symptom: 'Illegal RAM offset 1018000' as we try and send a page
to the wrong RAMBlock; potentially that could be a data
corruption if you were really unlucky.
Fixes: 84e7b80a05
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-id: 1449765106-6528-1-git-send-email-dgilbert@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Dividing the integer expressions transferred_bytes and time_spent and then
converting the integer quotient to type double discards the remainder, i.e.
the fractional part of the quotient. Fix this.
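The fixed expression looks roughly like this (following the mbps
calculation referred to above):

    /* Promote to double before dividing so the fractional part of the
     * throughput is not truncated. */
    s->mbps = (((double) transferred_bytes * 8.0) /
               ((double) time_spent / 1000.0)) / 1000.0 / 1000.0;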
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
socket_writev_buffer() writes in a loop, using g_poll() to block. If
g_poll() fails, it tries to write more before the file descriptor is
ready. In theory, this could go into a tight loop. In practice,
errors other than EINTR are really unlikely, and when they happen,
we're probably screwed anyway, so we can just as well loop.
Clean it up a bit: retry poll on EINTR, keep ignoring other errors.
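Sketch of the cleaned-up wait loop (field names assumed):

    GPollFD pfd = {
        .fd     = s->fd,
        .events = G_IO_OUT | G_IO_ERR | G_IO_HUP,
    };
    int ret;

    /* Block until the fd is writable; retry on EINTR, and keep ignoring
     * other (very unlikely) poll errors as before. */
    do {
        ret = g_poll(&pfd, 1, -1 /* no timeout */);
    } while (ret == -1 && errno == EINTR);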
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
If the migration speed is set to a very large value, block migration will
try to read all data into memory, because
(block_mig_state.submitted + block_mig_state.read_done) * BLOCK_SIZE
overflows and then always compares as less than the rate limit.
There is no need to read that much data into memory when the rate limit is
very large, so limiting the memory usage fixes the overflow problem.
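Sketch of the capped condition (the limit macro and its value are
assumptions):

    #define MAX_INFLIGHT_IO 512

    /* Stop reading ahead once enough requests are in flight, no matter
     * how large the rate limit is; this also sidesteps the overflow. */
    while ((block_mig_state.submitted + block_mig_state.read_done) *
               BLOCK_SIZE < qemu_file_get_rate_limit(f) &&
           (block_mig_state.submitted + block_mig_state.read_done) <
               MAX_INFLIGHT_IO) {
        /* ... queue another block read ... */
    }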
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
madvise() returns EINVAL in the case of many failures, but also
returns it in cases where the host kernel doesn't have THP enabled.
Postcopy only really cares that THP is off before it detects faults,
and turns it back on afterwards; so we're going to have
to assume that if the madvise fails then the host just doesn't do
THP and we can carry on with the postcopy.
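A minimal sketch of the tolerant behaviour (standalone example; QEMU's own
wrapper is qemu_madvise() and the details differ):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Ask the kernel to stop using transparent huge pages for the range;
     * if that fails, assume the host simply has no THP and carry on. */
    static void postcopy_disable_thp(void *host_addr, size_t length)
    {
        if (madvise(host_addr, length, MADV_NOHUGEPAGE) != 0) {
            fprintf(stderr, "madvise(MADV_NOHUGEPAGE): %s (ignored)\n",
                    strerror(errno));
        }
    }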
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Tested-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>