Replies: 6 comments 19 replies
-
Special devices can be any type, including raidz2 or draid. They just have to match the redundancy level of the pool. You can verify the behavior by making a test pool of sparse files:
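A minimal sketch of such a sparse-file test (the pool name `testpool` and all paths are my own placeholders; the `zpool` part needs root and a ZFS install, so it is guarded to be a no-op elsewhere):

```shell
# Throwaway demo: back both the data vdev and the special vdev with sparse
# files. The pool name "testpool" and all paths are placeholders.
mkdir -p /tmp/zpool-demo
for f in d1 d2 d3 d4 s1 s2 s3 s4; do
  truncate -s 512M "/tmp/zpool-demo/$f"   # sparse backing file
done

# Guarded so the snippet does nothing on machines without ZFS.
if command -v zpool >/dev/null 2>&1; then
  # raidz2 data vdev plus a special vdev at the same redundancy level.
  zpool create testpool \
    raidz2  /tmp/zpool-demo/d1 /tmp/zpool-demo/d2 /tmp/zpool-demo/d3 /tmp/zpool-demo/d4 \
    special raidz2 /tmp/zpool-demo/s1 /tmp/zpool-demo/s2 /tmp/zpool-demo/s3 /tmp/zpool-demo/s4
  zpool status testpool   # the special vdev shows up as raidz2
  zpool destroy testpool  # clean up
fi
```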
-
Another thing that’s not quite clear to me: if a special vdev is full, will it simply overflow to the regular devices, or will I experience something akin to a regular file system running out of inodes even though there is still plenty of unused disk space?
-
Interesting question, but why would you even want a single raidz2 SSD metadata vdev?! I get it in large setups where you can have 4 or 5 of those in a huge array. But small blocks/metadata is mostly intended for 4k-16k reads and writes, which are IOPS-limited, not throughput-limited. You’d much rather have the IOPS of multiple SSDs than the IOPS of a single SSD for the whole raidz2.
-
I have one more question: in the past, if the main pool was mirrored and the special vdev was (as was the only option) mirrored, the special vdev could be detached, but not if the main pool was a raidzN. If a raidz2 special vdev can be detached when the main pool is raidz2, do they need to have the same device count (e.g. 8 devices each), or does it also work if e.g. the main pool consists of 5 HDDs and the special vdev consists of 7 SSDs?
-
Looks like the TrueNAS nightlies are already using ZFS 2.4 RC3. Given that I just migrated all my data from the SSD pool to the HDD pool, I can now nuke the SSD pool and add the freed SSDs as a special vdev to the existing pool. If I do that, will the special vdev simply inherit/adopt the pool’s settings (like encryption)? What’s the precise command for adding the SSDs? I figure I can see what the distribution of file sizes looks like after a while, and adjust/rewrite if need be. But since I have plenty of space on the special vdev and in essence only want to banish large media files to the HDDs, I think the above should work...
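Not speaking for the docs here, but my understanding is the add would look roughly like this (the pool name `tank` and the device names are placeholders of mine; the new vdev’s redundancy has to match the existing raidz2):

```shell
# Dry-run first (-n prints the resulting layout without changing anything);
# pool name "tank" and device paths are placeholders.
zpool add -n tank special raidz2 \
  /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 \
  /dev/disk/by-id/ssd3 /dev/disk/by-id/ssd4

# Rerun without -n once the printed layout looks right.

# Then steer small blocks onto the special vdev. Whether a 16M cutoff is
# accepted depends on the platform's recordsize limits, so this is a sketch:
zfs set recordsize=16M tank
zfs set special_small_blocks=16M tank
```

Note that `special_small_blocks` only affects newly written blocks, which matches the plan of checking the size distribution later and rewriting if need be.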
-
One thing that might need some work is how available capacity is reported. Say I have set the size threshold for small files to 16MB, and I have 37TB worth of SSD and 80TB worth of HDD (capacities after raidz2 configuration), with the former being used as metadata and small-file storage for files up to 16MB. The capacity of the pool is shown as 80TB, and as I keep adding files (currently almost exclusively small ones; that will change down the road), it looks like I’m taking up no space.
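In the meantime, the per-class usage is at least visible at the vdev level, even if the dataset-level numbers lump everything together. A quick check, again assuming a pool named `tank`:

```shell
# -v breaks ALLOC/FREE down per vdev, so the special vdev's own
# consumption is visible separately from the HDD raidz2.
zpool list -v tank

# Dataset-level view for comparison (this is the one that conflates classes).
zfs list -o name,used,avail tank
```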
-
Maybe I’m wrong, but I’m under the impression that currently special vdevs for metadata and small blocks/files can only be of type stripe or mirror. Someone elsewhere explained that historically the feature was implemented to improve draid performance, and of course, if one specifically adds SSDs to a spinning disk array to improve performance, one would use the fastest possible way of doing that, and that’s of course not a raidzN setup. Perfectly reasonable.
However, I come from a completely different angle: I have an SSD RAIDZ2 pool and a spinning-disk RAIDZ2 pool, and there is no plan to add additional devices, for space and cost reasons. But since in both cases metadata is stored on a RAIDZ vdev anyway, it would still speed things up if the spinning disks had their metadata on the SSDs. Similarly, having two pools and having to manually decide which data goes where brings additional overhead.
If I could create a pool where both the SSDs and the spinning disks were RAIDZ2, but the SSDs were used as special vdev for small files/blocks and metadata, then most file ops would be as fast as an SSD RAIDZ2 even on the spinning disks, and I could organize the file system logically, rather than by what files I want on the spinners vs. on the SSDs.
So, coming from that angle, where everything is RAIDZ anyway and it’s just a question of whether things end up on slow or fast hardware, having a special vdev as raidz2 suddenly does make sense.
Am I missing something here? If not, how big would the effort be to actually get this implemented, given that both the raidz2 logic and the special vdev logic are already fully implemented? It would almost seem that, recommended practices aside, once a vdev is created, the zpool should be agnostic to how it’s being used. After all, nothing would stop me (except common sense) from using a mirror of spinning disks as a special vdev for an SSD RAIDZ2 pool. 😆