Replies: 1 comment
Seems like an issue with assumed parity, as discussed here: #15232 (comment). Assuming you had 100% utilization of the 4x20 raidz2 pool (probably unrealistic), with the new parity ratio you will be able to write more data than the reported free space suggests. If you do a send/recv of your old data, you can improve its space efficiency, but not the free-space reporting.
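To make the numbers concrete, a minimal sketch of the arithmetic, assuming 9 x 20 TB disks and that usable space is still deflated at the original 4-wide raidz2 data:parity ratio (approximations, not exact ZFS accounting):

raw capacity       = 9 disks x 20 TB                         = 180 TB ~= 164 TiB  (matches SIZE in zpool list)
reported usable    = 164 TiB x 2/4 (old 2 data : 2 parity)  ~= 82 TiB             (~79.4 TiB after slop/reservations)
new-write capacity = 164 TiB x 7/9 (new 7 data : 2 parity)  ~= 127 TiB

And a sketch of rewriting old data with send/recv so it is stored at the new, wider stripe; the dataset and snapshot names here are hypothetical:

zfs snapshot -r Data/archive@rewrite
zfs send -R Data/archive@rewrite | zfs recv Data/archive-new   # blocks are rewritten at the 9-wide ratio
# after verifying the new copy:
zfs destroy -r Data/archive
zfs rename Data/archive-new Data/archive

This reclaims physical space consumed by old narrow-stripe blocks, but, as noted above, the pool's free-space reporting still assumes the original parity ratio.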
TrueNAS SCALE 25.04.2.4
We initially set up the system with four 20 TB disks in a raidz2 configuration.
Only a single pool was created.
As the pool was about to fill up, we added five additional disks to increase its size.
They were added using the extend-vdev (raidz expansion) method.
Once one disk was added and the expansion completed, the next disk was added.
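For reference, a rough CLI equivalent of what the GUI's extend-vdev does (a sketch; the pool name Data matches the output below, while the raidz vdev label and device name are assumptions):

zpool attach Data raidz2-0 /dev/sdX    # starts raidz expansion onto the new disk
zpool status Data                      # the "expand:" line reports progress/completion
# repeat for each of the five disks, waiting for each expansion to finish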
We are not getting the full usable space after the expansion.
Instead of the expected 120 TiB, the GUI shows only 79.42 TiB.
When we try to copy data, it does not allow us to copy beyond that limit.
All nine disks are online.
zpool list gives the following (note that SIZE and FREE here are raw capacities, including parity):
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Data        164T   128T  35.4T        -         -     7%    78%  1.00x  ONLINE  /mnt
boot-pool   232G  3.80G   228G        -         -     9%     1%  1.00x  ONLINE  -
zpool status gives the following output:
  pool: Data
 state: ONLINE
  scan: scrub canceled on Thu Oct 23 10:35:34 2025
expand: expanded raidz2-0 copied 55.3T in 23:40:16, on Wed Oct 8 21:31:21 2025
config:

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:10 with 0 errors on Wed Oct 29 03:45:12 2025
config:

errors: No known data errors
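To confirm how the pool is accounting for space, a few read-only commands may help (the pool name Data comes from the output above; exact output will vary):

zpool get feature@raidz_expansion Data   # should report "active" after an expansion
zpool list -v Data                       # per-vdev view of SIZE/ALLOC/FREE
zfs list -o space Data                   # dataset-level USED/AVAIL breakdown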
The Storage dashboard shows 79.42 TiB as usable space.
Requesting someone to provide a resolution for this.