qemu-img: make is_allocated_sectors() more efficient

Consider the case when the whole buffer is zero and its end is unaligned.

If i <= tail, we return 1 and do one unaligned WRITE, so RMW happens.

If i > tail, we do one aligned WRITE_ZERO (or skip it if the target is
zeroed) and again one unaligned WRITE, so RMW happens.

Let's do better: don't fragment the whole-zero buffer, and report it as
ZERO. In case of a zeroed target we then do nothing and avoid RMW
entirely. If the target is not zeroed, one unaligned WRITE_ZERO should
not be much worse than one unaligned WRITE.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20211217164654.1184218-3-vsementsov@virtuozzo.com>
Tested-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Vladimir Sementsov-Ogievskiy
2021-12-17 17:46:54 +01:00
committed by Kevin Wolf
parent 51cd8bddd6
commit 96054c76ff
2 changed files with 21 additions and 10 deletions

@@ -1171,19 +1171,34 @@ static int is_allocated_sectors(const uint8_t *buf, int n, int *pnum,
         }
     }
 
+    if (i == n) {
+        /*
+         * The whole buf is the same.
+         * No reason to split it into chunks, so return now.
+         */
+        *pnum = i;
+        return !is_zero;
+    }
+
     tail = (sector_num + i) & (alignment - 1);
     if (tail) {
         if (is_zero && i <= tail) {
-            /* treat unallocated areas which only consist
-             * of a small tail as allocated. */
+            /*
+             * For sure next sector after i is data, and it will rewrite this
+             * tail anyway due to RMW. So, let's just write data now.
+             */
             is_zero = false;
         }
         if (!is_zero) {
-            /* align up end offset of allocated areas. */
+            /* If possible, align up end offset of allocated areas. */
             i += alignment - tail;
             i = MIN(i, n);
         } else {
-            /* align down end offset of zero areas. */
+            /*
+             * For sure next sector after i is data, and it will rewrite this
+             * tail anyway due to RMW. Better is avoid RMW and write zeroes up
+             * to aligned bound.
+             */
             i -= tail;
         }
     }
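
To make the before/after behaviour concrete, here is a minimal standalone sketch,
not taken from QEMU: chunk_len() only mirrors the control flow of the hunk above,
the scanning loop and block-layer details are omitted, and the 11-sector
whole-zero buffer with an 8-sector alignment is an invented example. With the
pre-patch logic such a buffer is split into an aligned ZERO chunk plus a
3-sector data tail (one unaligned WRITE, hence RMW); with the patched logic the
whole buffer is reported as a single ZERO chunk.

/*
 * zero_tail_demo.c -- illustration only, not QEMU code.
 *
 * chunk_len() mirrors the control flow of the hunk above: 'i' leading
 * sectors out of 'n' (starting at 'sector_num') were classified as
 * *is_zero, and it returns how many sectors to report in the current
 * chunk.  'patched' selects the old or the new tail handling.
 */
#include <stdio.h>
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static int chunk_len(int i, int n, int64_t sector_num, int alignment,
                     int *is_zero, int patched)
{
    int tail;

    if (patched && i == n) {
        /* Whole buffer is uniform: report it as a single chunk. */
        return i;
    }

    tail = (sector_num + i) & (alignment - 1);
    if (tail) {
        if (*is_zero && i <= tail) {
            /* Zero run not longer than the tail: downgrade it to data. */
            *is_zero = 0;
        }
        if (!*is_zero) {
            /* Align up the end offset of data areas. */
            i += alignment - tail;
            i = MIN(i, n);
        } else {
            /* Align down the end offset of zero areas. */
            i -= tail;
        }
    }
    return i;
}

int main(void)
{
    const int n = 11;        /* whole-zero buffer whose end is not aligned */
    const int alignment = 8; /* target alignment, in sectors */

    for (int patched = 0; patched <= 1; patched++) {
        int64_t pos = 0;
        int left = n;

        printf("%s logic:\n", patched ? "patched" : "pre-patch");
        while (left > 0) {
            int is_zero = 1;  /* every remaining sector is zero ... */
            int len = chunk_len(left, left, pos, alignment, &is_zero,
                                patched);   /* ... so i == n on each call */

            printf("  sectors %2d..%2d -> %s\n",
                   (int)pos, (int)(pos + len - 1),
                   is_zero ? "ZERO" : "data (unaligned WRITE, RMW)");
            pos += len;
            left -= len;
        }
    }
    return 0;
}

Built with any C99 compiler, it prints two chunks (ZERO plus a data tail) for
the pre-patch logic and one ZERO chunk covering all 11 sectors for the patched
one.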