D40792.id123931.diff
diff --git a/website/content/en/status/report-2023-04-2023-06/nvmf.adoc b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
new file mode 100644
--- /dev/null
+++ b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
@@ -0,0 +1,61 @@
+=== NVMe over Fabrics
+
+Links: +
+link:https://github.com/bsdjhb/freebsd/tree/nvmf2[nvmf2 branch]
+
+Contact: John Baldwin <jhb@FreeBSD.org>
+
+NVMe over Fabrics enables communication with a storage device using
+the NVMe protocol over a network fabric. This is similar to using
+iSCSI to export a storage device over a network using SCSI commands.
+
+NVMe over Fabrics currently defines network transports for Fibre
+Channel, RDMA, and TCP.
+
+The work in the nvmf2 branch includes a userland library (lib/libnvmf)
+which contains an abstraction for transports and an implementation of
+a TCP transport. It also includes changes to nvmecontrol to add
+'discover', 'connect', and 'disconnect' commands to manage connections
+to a remote controller.
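+
+A hypothetical session using these commands might look like the
+following (the address and NQN are placeholders, and the exact flags
+and arguments of the in-progress commands may differ):
+
+[source,shell]
+----
+# Query a remote discovery controller over TCP.
+nvmecontrol discover 192.0.2.10:4420
+
+# Connect to a subsystem advertised in the discovery log page.
+nvmecontrol connect 192.0.2.10:4420 nqn.2023-06.org.example:storage0
+
+# Drop the association again.
+nvmecontrol disconnect nvme1
+----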
+
+The branch also contains an in-kernel Fabrics implementation.
+nvmf_transport.ko contains a transport abstraction that sits in
+between the nvmf host (initiator in SCSI terms) and the individual
+transports. nvmf_tcp.ko contains an implementation of the TCP
+transport layer. nvmf.ko contains an NVMe over Fabrics host
+(initiator) which connects to a remote controller and exports remote
+namespaces as disk devices. Similar to the nvme(4) driver for NVMe
+over PCI-express, namespaces are exported via /dev/nvmeXnsY devices
+which only support simple operations, but are also exported as ndaX
+disk devices via CAM. Unlike nvme(4), nvmf(4) does not support the
+nvd(4) disk driver; instead, it always uses a CAM SIM. nvmecontrol
+can be used with remote namespaces and remote controllers, e.g. to
+fetch log pages, identify data, etc.
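+
+Assuming the module names above, the TCP host side could be loaded
+with (a sketch; module dependencies may also auto-load):
+
+[source,shell]
+----
+# nvmf.ko depends on nvmf_transport.ko; nvmf_tcp.ko provides the
+# TCP transport layer.
+kldload nvmf_tcp nvmf
+----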
+
+Note that nvmf(4) is currently fairly simple, and handling of some
+error cases is still a TODO. If an error occurs, the queues (and
+backing network connections) are dropped, but the devices remain with
+I/O requests paused. nvmecontrol reconnect can be used to establish a
+new set of network connections and resume operation. Unlike iSCSI,
+which uses a persistent daemon (iscsid) to reconnect after an error,
+reconnection must be done manually.
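+
+For example, after a dropped connection, the paused device could be
+resumed with something like the following (the device name and
+address are illustrative):
+
+[source,shell]
+----
+nvmecontrol reconnect nvme1 192.0.2.10:4420
+----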
+
+The current code is very new and likely not robust. It is certainly
+not ready for production use. Experienced users with an interest in
+NVMe over Fabrics, who do not mind all their data vanishing in a puff
+of smoke after a kernel panic, can start testing it at their own
+risk.
+
+The next main task is to implement a Fabrics controller (target in
+SCSI language): probably a simple one in userland first, followed by
+a "real" one that offloads data handling to the kernel but is
+integrated with ctld(8), so that individual disk devices can be
+exported via iSCSI, NVMe, or both, using a single configuration file
+and a single daemon to manage all of them. This may require a fair
+bit of refactoring in ctld to make it less iSCSI-specific, however.
+Working on the controller side will also validate some of the
+currently under-tested API design decisions in the
+transport-independent layer. I think it probably does not make sense
+to merge this into the tree until after that step.
+
+Sponsored by: Chelsio Communications
\ No newline at end of file
D40792: 2023Q2 status report for NVMe over Fabrics