View the thread, titled "Understanding TrueNAS JBOD Servers: A Comprehensive Guide" which is posted in Computer and Networking Forum on Electricians Forums.

Dan
Staff member · Admin · Mod
Understanding TrueNAS JBOD Servers: A Comprehensive Guide

In the world of data storage, businesses and tech enthusiasts often face the challenge of managing large volumes of data efficiently. As the demand for scalable and flexible storage solutions continues to grow, systems like TrueNAS and JBOD (Just a Bunch of Disks) have emerged as popular choices. This article aims to explain what TrueNAS JBOD servers are, their benefits, how they work, and why they are a reliable solution for enterprise storage needs.


What is TrueNAS?​

TrueNAS is an open-source storage platform developed by iXsystems. It is designed to provide high-performance, scalable storage with robust data protection features. TrueNAS is built on the ZFS (Zettabyte File System) file system, which is known for its advanced data management capabilities, including data integrity checks, compression, snapshots, and deduplication. TrueNAS is available in two free editions, the FreeBSD-based TrueNAS CORE and the Linux-based TrueNAS SCALE (which adds clustering and virtualization features), alongside the commercially supported TrueNAS Enterprise.

TrueNAS can be used for a wide variety of storage needs, from home users to large enterprise-level deployments, supporting configurations ranging from basic single-disk setups to complex multi-petabyte arrays.

What is a JBOD Server?​

JBOD stands for "Just a Bunch of Disks," which refers to a storage configuration that uses multiple hard drives without any RAID (Redundant Array of Independent Disks) configuration. Unlike RAID systems that combine multiple drives into one logical volume for redundancy or performance benefits, JBOD simply groups multiple drives together, allowing each drive to operate independently.

In a JBOD configuration, the drives appear as separate volumes in the system, and data is stored on individual disks. There is no mirroring, striping, or parity. This setup is cost-effective, offering large storage capacity without the need for complex RAID management.

TrueNAS JBOD Servers: Combining TrueNAS and JBOD​


JBOD Storage Server with TrueNAS Software
A TrueNAS JBOD server combines the flexibility of a JBOD configuration with the power of TrueNAS software. The main appeal of such a system is the ability to scale storage easily by adding more drives as needed. This is especially useful for businesses that deal with large datasets and require both flexibility and performance without incurring the cost of higher-end RAID configurations.

In a TrueNAS JBOD setup, each individual disk can be added to the TrueNAS pool, providing the benefits of TrueNAS's powerful file system features while keeping the storage configuration simple. TrueNAS supports JBODs through its ability to use ZFS, which means you can configure storage pools with individual disks and still enjoy advanced features like:
  1. Data Integrity Checks: ZFS ensures that data is accurate and free from corruption. Even when using JBOD, TrueNAS provides checksums to protect the data on each disk.
  2. Snapshots and Clones: Even with JBOD, TrueNAS users can create snapshots of the entire storage pool, enabling quick backups and restores of data.
  3. Compression and Deduplication: TrueNAS can automatically compress and deduplicate data across your storage pool, maximizing efficiency and minimizing space usage.
  4. Scalability: TrueNAS JBOD servers are highly scalable. You can add more drives to the system without worrying about the complexity of traditional RAID arrays.
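As a rough illustration of these features, a pool of independent disks with compression and snapshots can be sketched from the command line (a hypothetical sketch: the pool name `tank`, the device names `da1`-`da3`, and the dataset name are placeholders, and a striped pool like this has no redundancy):

```shell
# Create a striped pool from three standalone disks (no redundancy).
# Device names (da1, da2, da3) and the pool name "tank" are examples.
zpool create tank da1 da2 da3

# Create a dataset with lightweight compression enabled.
zfs create -o compression=lz4 tank/media

# Take a snapshot of the dataset for quick rollback later.
zfs snapshot tank/media@before-import
```

On TrueNAS itself these steps are normally done through the web interface rather than the shell, but the underlying pool layout is the same.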

Benefits of TrueNAS JBOD Servers​

1. Cost-Effectiveness

One of the primary benefits of a TrueNAS JBOD server is its cost-effectiveness. Since JBOD doesn’t require complex RAID configurations, it saves money on expensive RAID hardware. Additionally, because each drive is independent, users have the flexibility to mix and match different sizes and types of drives, enabling gradual expansion.

2. Ease of Management

TrueNAS’s web interface makes managing a JBOD server straightforward, even for users without extensive technical experience. The system’s powerful ZFS file system provides data protection, while the TrueNAS GUI simplifies drive management, making it easy to add, remove, or replace disks.

3. Data Protection

Although JBOD doesn't offer the same level of redundancy as RAID, TrueNAS still adds meaningful data protection. ZFS checksums detect silent corruption on each disk, and pools can be replicated to another system using ZFS snapshots and replication. A failed disk therefore need not mean permanent data loss, provided a replica or backup exists.
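The replication mentioned above is typically done with ZFS snapshots and `zfs send`/`zfs recv` (a hypothetical sketch; the pool, dataset, and host names are placeholders):

```shell
# Snapshot the dataset, then replicate it to a second TrueNAS box.
# "tank/archive", "backup-host", and "backuppool" are example names.
zfs snapshot tank/archive@nightly
zfs send tank/archive@nightly | ssh backup-host zfs recv backuppool/archive

# Later runs can send only the changes since the previous snapshot:
zfs snapshot tank/archive@nightly2
zfs send -i @nightly tank/archive@nightly2 | ssh backup-host zfs recv backuppool/archive
```

TrueNAS exposes the same mechanism through its Replication Tasks GUI, so scheduled replication does not require scripting.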

4. Seamless Expansion

A JBOD configuration allows for seamless storage expansion. As your storage needs grow, you can add more drives to your TrueNAS server without significant downtime or disruption to the system. This makes JBOD an excellent option for businesses looking for an expandable solution that can evolve with their needs.

5. Strong Performance

When configured with fast storage devices, such as SSDs, a TrueNAS JBOD server can provide excellent performance without the overhead of RAID systems. Users can enjoy high-speed data access while keeping costs low by utilizing individual drives.

Use Cases for TrueNAS JBOD Servers​

TrueNAS JBOD servers are ideal for various use cases, including:
  • Data Archiving: For companies needing to store vast amounts of data without requiring advanced redundancy features, TrueNAS JBOD offers a flexible and cost-effective solution.
  • Media Storage: For video editing, 3D modeling, or large media libraries, JBOD servers can store large files without the need for complex RAID setups.
  • Backup Storage: TrueNAS JBOD servers provide a simple solution for offsite or backup storage, especially when large capacity is needed without the requirement for immediate data protection or redundancy.
  • Virtualization: TrueNAS SCALE can also use JBOD for storing virtual machine images, containers, and other large datasets, especially in environments where high availability is handled at the application layer.

Potential Downsides of TrueNAS JBOD Servers​

While TrueNAS JBOD servers offer a lot of benefits, they also come with a few drawbacks to consider:
  • Lack of Redundancy: Since JBOD configurations don’t use RAID, data redundancy and fault tolerance are not inherent. If one drive fails, the data on that drive may be lost, unless additional backup strategies are employed.
  • Performance Variability: In a JBOD setup, performance can vary depending on the individual disks used. For example, if the disks are not of the same type or speed, the overall performance may be inconsistent.

TL;DR

TrueNAS JBOD servers provide an effective and affordable storage solution for users who need to manage large volumes of data without the complexity and expense of traditional RAID systems. They offer scalability, data protection, and the flexibility of adding more disks over time. However, users must be mindful of the potential lack of redundancy and carefully manage their backups and data integrity.

For businesses or home users looking for cost-effective storage that scales with their needs, TrueNAS JBOD servers are a powerful option to consider. With TrueNAS’s user-friendly interface and advanced file system features, JBOD configurations can be highly efficient for a variety of use cases, from archiving and media storage to virtual environments.
 


I can't find this in the documentation.
I'm converting several storage servers into a single, big TrueNAS box. They all have duplicate servers acting as backups, so if there's a loss of data or a disk crash and I lose it all, I can copy everything back from the backup. It can be down for some time.
One of the servers is a JBOD array of 2TB, 3TB and 4TB disks. I want to move those disks into my TrueNAS server (I know I'll have to copy the data over), reducing the number of servers. I will then gradually buy the right disks and replace the JBOD with a RAID solution.
How do I create a JBOD in TrueNAS?
 
It is helpful if you use the correct terminology. We can guess that by JBOD you mean "Striped Pool", with no redundancy, and by "raid solution" you mean ZFS RAID-Zx or Mirroring. Here is a guide on ZFS terminology:


Unfortunately that terminology primer does not mention Striped Pools.

If you can clarify what you mean by JBOD, that would be helpful. In some instances, JBOD implies the use of an external disk tray without any RAID controller, (so the disks appear as "Just a Bunch Of Disks").

Generally TrueNAS attempts to push the user into well supported choices, (aka pools with redundancy). I think you can use the Advanced option to make a Striped Pool.
 
Thanks for getting back to me. Sorry I wasn't clear. Currently I have a Windows server with a volume that has been extended to include three 4TB drives, a 3TB drive, a 2TB drive and two 8TB drives.
I'm going to replace them with a RAID solution at a later date, as I gradually buy new 12TB drives. For now I just want to move them into TrueNAS, starting with my first 12TB drive to replace the older drives, and remove a server to save power.

TrueNAS, (Core or SCALE), generally wants the pool's configuration decided at the start. Adding disks is problematic in the sense you can't convert a Striped Pool into a RAID-Zx pool, (without full backup and re-creation).

You can add a Mirror disk to a Striped disk, converting that single disk vDev into a Mirror vDev. If you have say 2 disks in a Striped Pool, you can add a Mirror disk to each, and convert the pool from no redundancy to 2-way Mirror redundancy. Some people think mirror pools are better because you can add 2 disks at a time to "grow your pool". Or replace 2 existing disks with larger ones to "grow your pool". But, you sacrifice 50% of your storage to Mirroring.
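Converting a single-disk vDev into a Mirror, as described above, is done with `zpool attach` (a hypothetical sketch; `tank` and the device names are placeholders):

```shell
# "tank" currently has two single-disk (striped) vDevs: da1 and da2.
# Attach a new disk to each to turn both vDevs into 2-way mirrors.
zpool attach tank da1 da3   # da1 + da3 become mirror-0
zpool attach tank da2 da4   # da2 + da4 become mirror-1

# Confirm resilvering has started and the vDevs now show as mirrors.
zpool status tank
```

In TrueNAS the same operation is available in the GUI under the pool's vDev management (Extend).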

However, a RAID-Zx pool would require a new RAID-Zx vDev to "grow your pool". Or replace every disk in an existing RAID-Zx vDev to "grow your pool".

That's just a bit of background on initial design choices.

If you need to start with a single 12TB disk, you can certainly do so. In TrueNAS SCALE, (I don't have access to the Core GUI at the moment), I can:
Storage -> Create Pool -> Add Available Disks -> Striped, (default if one disk)
Check the "Force" option and accept the warning.
Create

That should create what you asked for, a striped disk pool.
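For reference, those GUI steps correspond roughly to this at the command line (a hypothetical sketch; the pool and device names are placeholders, and on TrueNAS the GUI is the supported way to create pools):

```shell
# Create a one-disk striped pool; "-f" mirrors the GUI's "Force" checkbox,
# which overrides the warning about a pool with no redundancy.
zpool create -f tank da1

# Verify: the pool should show a single striped (non-redundant) vDev.
zpool status tank
```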
 
 
With the introduction of SAS 12Gbps, seems like "it's time" to do a braindump on SAS.

Work in progress, as usual.

History

By the late '90s, SCSI and PATA were the dominant technologies for attaching disks. Both were parallel, multi-drop bus topologies, and this kind of sucked. SATA and Serial Attached SCSI (SAS) evolved from them, using serial links and a hub-and-spoke design.

Early SATA/150 and SATA/300 were a bit rough and had some issues, as did SAS 3Gbps. You probably want to avoid older controllers, cabling, expanders, etc. that do not support 6Gbps because some of it has "gotchas" in it. In particular a lot of it has 2TB size limitations. Most SAS 3Gbps hard drives are fine though.

Similarities, Differences, Interoperability

SAS and SATA operate at the same link speeds and use similar cabling. SAS normally operates at a higher differential voltage than SATA and can run over longer cabling.

SAS and SATA use different connectors on the drive. The SATA drive connector has a gap between the signal and power sections, which allows separate power and data cables to be easily connected. The SAS drive connector does not have a gap, and instead has a second set of pins on top. This second set of pins is the second (redundant) SAS port. There are pictures of the top and the bottom of the drive connector.

SATA drives can be attached to a SAS port. Electrically, the SAS port is designed to allow attachment of a SATA drive, and will automatically run at SATA-appropriate voltages. Physically, the SAS backplane connector has an area that will allow either the gapless SAS or the gapped SATA connector to fit. See picture of SAS backplane socket.

SAS drives are incompatible with SATA ports, however, and a SATA connector will not attach to an SAS drive. Don't try. The gap is there to block a SAS drive from being connected to typical SATA cabling, or to a SATA backplane socket.

When a SATA drive is attached to a SAS port, it is operated in a special mode using the Serial ATA Tunneling Protocol (STP).

SATA drives are inherently single-ported, meaning that they can only be attached to one thing at a time. SAS devices, however, are usually dual-ported. This means that, electrically, there are two ports on the single SAS connector. One is the primary and one is the secondary. The secondary port may be supported by a backplane or enclosure to allow the attachment of a second host, or to allow multiple paths back to a host for a high-availability configuration.

Some people use a special device called an interposer to take an inexpensive SATA drive and make it look like a nearline SAS drive (usually to get multipathing). Don't do this. They're crummy, just another thing to break.

The primary takeaway: You can connect SATA drives to an SAS port and it is expected to work. You cannot connect SAS drives to a SATA port. That absolutely won't work.

Cabling


As already noted, single-lane SAS internal cables are virtually identical to SATA cables. The difference is that SAS cables can be longer; SATA is limited to 1 meter. Since a given cable may end up carrying SATA traffic, it is best to use cables less than 1 meter long if at all possible.

However, most SAS deployments involve larger numbers of disks, and SAS has some special connectors used to reduce wiring and aggregate lanes together.

For SAS 6Gbps, this is often the SFF8087 (internal, "Mini SAS") or SFF8088 (external). Four lanes gives you a total capacity of 24Gbps over a single SFF8087 connector. Some newer boards are using SFF8643 ("Mini SAS HD") for SAS 6Gbps.

For SAS 12Gbps, this is the SFF8643 (internal, "Mini SAS HD") and SFF8644 (external) connector. Again, four lanes gives you 48Gbps over a single SFF8643 connector. Now and then you will also see the SFF8643 used for 6Gbps, especially on dense small form factor mainboards, and it is also used for NVMe. Be warned.

A multilane connector may be broken into its four individual lanes using a breakout cable. For example, if you get an SAS HBA, it probably comes with one or two SFF8087's on it, but you may want to directly attach hard drives. A breakout cable allows this. This is a SFF8087-to-single-SAS breakout cable; this is a SFF8643-to-single-SAS breakout cable.

Also, in some scenarios, a mainboard may offer discrete SAS ports which you desire to aggregate into a multilane cable, and so reverse-breakout cables are available as well.

Internal connectors can be transformed into external connectors using an adapter plate. This allows you to create servers using storage in more than one chassis. This is "not for beginners" but the concepts aren't hard.

It is possible to mix 6Gbps and 12Gbps SAS. Just as with SATA, significant effort has been put into backwards compatibility.

SAS Ports

Some mainboards have SAS ports. These may be single-device ones that look like (and will work with) SATA, or they may have a multilane connector (SFF8087 or SFF8643). These usually work fine with FreeNAS if they are hooked up to something like an Intel PCH Storage Control Unit. Most of the rest of the time, you need to add an SAS Host Bus Adapter ("HBA"), which will typically give you eight lanes on two multilane connectors. Be aware that you should use an LSI HBA crossflashed to IT mode, which is discussed in this linked article in greater depth. Please do not try to use a RAID controller.

SAS Expanders

A SAS expander essentially takes a SAS multilane connection and allows the attachment of additional SAS devices. These devices all share in the available bandwidth of the SAS multilane connection. SAS expanders can be cascaded as well. In the following picture:
[Diagram: three cascaded SAS expanders]
we see three SAS expanders. The first one only distributes to the second and third. The second and third each attach to hard disks. Modern expanders typically have enough channels that you wouldn't need to cascade them for just this small number of disks. A typical modern expander might have 36 lanes, allowing 24 disks, two upstream four lane host connections, and a downstream four lane connection to another expander.

There are advantages and disadvantages to expanders. A primary advantage is cabling simplicity: if you have a 24 drive chassis with a backplane that uses an expander, you need only a single SFF8087 to attach from the backplane to the HBA. The two main downsides are that those 24 drives then share the 24Gbps that's available on a SFF8087, and that in some cases some specific SATA disks have been known to not play nicely and have caused problems for other attached devices on a SAS expander.

As a matter of throughput, a typical modern hard drive can push 125-150MBytes/sec (that's about 1-1.25Gbps) so if you load up 24 disks * 1.25Gbps, you do exceed the 24Gbps that the multilane is capable of. This, however, assumes that you are doing sequential access to all drives simultaneously. That is unlikely at best.
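A quick back-of-the-envelope check of that oversubscription (using the ~150 MBytes/sec per-disk sequential rate assumed above):

```shell
# ~150 MBytes/sec per disk = 1.2 Gbps; 24 disks worst case vs. a 24 Gbps SFF8087.
PER_DISK_MBYTES=150
DISKS=24
AGGREGATE_GBPS=$(( PER_DISK_MBYTES * 8 * DISKS / 1000 ))
echo "aggregate: ${AGGREGATE_GBPS} Gbps vs. 24 Gbps link"
```

So the worst-case aggregate modestly exceeds the multilane link, but only under simultaneous sequential access to every drive.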

The picture changes for SSD, and expanders may not be a good idea for use with large numbers of SSD's if you are expecting high throughput.

SAS expanders can come pre-installed on a backplane, or can be purchased as separate devices. The separate devices often come on what appears to be a PCIe card, but this is only to take advantage of mainboard power. An expander such as the Intel RES2SV240 may be attached anywhere convenient inside a chassis and powered via a Molex power plug. If you have a free PCIe slot, of course, that is a great place to put it too.

Supermicro backplanes (TQ, A, BE16, BE26)

Supermicro offers backplanes for many of their chassis in a variety of configurations.

The TQ option brings each individual bay out to an individual SAS connector. This is straightforward and nonthreatening to those who are unfamiliar with multilane. However, it is a bad idea to have twenty four individual cables to have to dig through if you suspect a bad cable, etc.

The A option is the best generalized option. It is the same as the TQ except that it brings groups of four bays out to a single SFF8087. The SFF8087 is a latching connector and is therefore substantially safer than the individual cables in the TQ. For a 24 drive chassis, then, there will be six SFF8087 connectors on the backplane. You must connect all of them to something, or the corresponding bays will be dead. You can attach them to three eight-port HBA's (such as three IBM ServeRAID M1015's) and this is a high performance configuration that allows full 6Gbps on all slots. You could also attach them to an SAS expander, but if so, why not just buy a backplane with an expander?

The BE16 (or 12Gbps BE1C) option brings out the attached bays as a single SFF8087. For a 12-drive SATA array, this is an ideal choice because there is no contention on the 24Gbps link and the cabling is stupid-simple. Very attractive option. For a 24-drive SATA array, I still think this is probably just fine because you're not likely to actually hit contention issues.

The BE26 (or 12Gbps BE2C) option adds a secondary expander onto the attached bays, making the SAS secondary ports available. This is useless on a SATA array, but if you're deploying SAS drives and you want the multipath capabilities, this is your beast.

External Shelves

External shelves fall into two general categories, ones with controllers and ones with expanders. Do not try to use one with a RAID controller built in. They'll just be problematic under ZFS. An external drive shelf that has an SAS expander in it, however, is very straightforward and may be attached in a manner similar to any other SAS expander.

Note that external shelves introduce a significant risk in the form of power catastrophes. If your shelf powers off but your server doesn't, this can be destructive to the pool.

Sidebands

SAS multilane cables may also include support for sideband signalling. This is a way for the backplane and the RAID controller, or mainboard, to indicate status, such as failed drive indication. This isn't generally useful in FreeNAS, which lacks software support for this murky and often arcane area of hardware design. For example, a RAID controller with sideband support and a compatible backplane can support features like "Identify Drive" or "Drive Fail" to identify a specific drive. In a reverse breakout cable scenario, four single SAS lanes from a mainboard plus an SGPIO header might connect to a single SFF8087. Discussed somewhat further at ftp://ftp.seagate.com/pub/sff/SFF-8448.PDF
 
