.. SPDX-License-Identifier: GPL-2.0

========
ORANGEFS
========

OrangeFS is an LGPL userspace scale-out parallel storage system. It is ideal
for large storage problems faced by HPC, BigData, Streaming Video,
Genomics, and Bioinformatics.

Orangefs, originally called PVFS, was first developed in 1993 by
Walt Ligon and Eric Blumer as a parallel file system for Parallel
Virtual Machine (PVM) as part of a NASA grant to study the I/O patterns
of parallel programs.

Orangefs features include:

* Distributes file data among multiple file servers
* Supports simultaneous access by multiple clients
* Stores file data and metadata on servers using local file system
  and access methods
* Userspace implementation is easy to install and maintain
* Direct MPI support
* Stateless


Mailing List Archives
=====================

http://lists.orangefs.org/pipermail/devel_lists.orangefs.org/


Mailing List Submissions
========================

devel@lists.orangefs.org


Documentation
=============

http://www.orangefs.org/documentation/

Running ORANGEFS On a Single Server
===================================

OrangeFS is usually run in large installations with multiple servers and
clients, but a complete filesystem can be run on a single machine for
development and testing.

On Fedora, install orangefs and orangefs-server::

    dnf -y install orangefs orangefs-server

There is an example server configuration file in
/etc/orangefs/orangefs.conf. Change localhost to your hostname if
necessary.

To generate a filesystem to run xfstests against, see below.

There is an example client configuration file in /etc/pvfs2tab. It is a
single line. Uncomment it and change the hostname if necessary. This
controls clients which use libpvfs2. This does not control the
pvfs2-client-core.

Create the filesystem::

    pvfs2-server -f /etc/orangefs/orangefs.conf

Start the server::

    systemctl start orangefs-server

Test the server::

    pvfs2-ping -m /pvfsmnt

Start the client. The module must be compiled in or loaded before this
point::

    systemctl start orangefs-client

Mount the filesystem::

    mount -t pvfs2 tcp://localhost:3334/orangefs /pvfsmnt

Userspace Filesystem Source
===========================

http://www.orangefs.org/download

Orangefs versions prior to 2.9.3 are not compatible with the
upstream version of the kernel client.


Building ORANGEFS on a Single Server
====================================

Where OrangeFS cannot be installed from distribution packages, it may be
built from source.

You can omit --prefix if you don't care that things are sprinkled around
in /usr/local. As of version 2.9.6, OrangeFS uses Berkeley DB by
default; the default will probably change to LMDB soon.

::

    ./configure --prefix=/opt/ofs --with-db-backend=lmdb --disable-usrint

    make

    make install

Create an orangefs config file by running pvfs2-genconfig and
specifying a target config file. Pvfs2-genconfig will prompt you
through the configuration. Generally it works fine to take the defaults,
but you should use your server's hostname rather than "localhost" when
it comes to that question::

    /opt/ofs/bin/pvfs2-genconfig /etc/pvfs2.conf

Create an /etc/pvfs2tab file (localhost is fine)::

    echo tcp://localhost:3334/orangefs /pvfsmnt pvfs2 defaults,noauto 0 0 > \
        /etc/pvfs2tab

Create the mount point you specified in the tab file if needed::

    mkdir /pvfsmnt

Bootstrap the server::

    /opt/ofs/sbin/pvfs2-server -f /etc/pvfs2.conf

Start the server::

    /opt/ofs/sbin/pvfs2-server /etc/pvfs2.conf

Now the server should be running. Pvfs2-ls is a simple way to
verify that it is::

    /opt/ofs/bin/pvfs2-ls /pvfsmnt

If stuff seems to be working, load the kernel module and
turn on the client core::

    /opt/ofs/sbin/pvfs2-client -p /opt/ofs/sbin/pvfs2-client-core

Mount your filesystem::

    mount -t pvfs2 tcp://`hostname`:3334/orangefs /pvfsmnt


Running xfstests
================

It is useful to use a scratch filesystem with xfstests. This can be
done with only one server.

Make a second copy of the FileSystem section in the server configuration
file, which is /etc/orangefs/orangefs.conf. Change the Name to scratch.
Change the ID to something other than the ID of the first FileSystem
section (2 is usually a good choice).

Then there are two FileSystem sections: orangefs and scratch.
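
Loosely, the two sections might then look like the sketch below. This is
illustrative only; the exact syntax and the ID of the first section are
whatever pvfs2-genconfig produced, and all of the other generated
directives in each section stay as they are::

    <FileSystem>
        Name orangefs
        ID 9
        ...
    </FileSystem>

    <FileSystem>
        Name scratch
        ID 2
        ...
    </FileSystem>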

This change should be made before creating the filesystem.

::

    pvfs2-server -f /etc/orangefs/orangefs.conf

To run xfstests, create /etc/xfsqa.config::

    TEST_DIR=/orangefs
    TEST_DEV=tcp://localhost:3334/orangefs
    SCRATCH_MNT=/scratch
    SCRATCH_DEV=tcp://localhost:3334/scratch

Then xfstests can be run::

    ./check -pvfs2


Options
=======

The following mount options are accepted:

acl
  Allow the use of Access Control Lists on files and directories.

intr
  Some operations between the kernel client and the user space
  filesystem can be interruptible, such as changes in debug levels
  and the setting of tunable parameters.

local_lock
  Enable POSIX locking from the perspective of "this" kernel. The
  default file_operations lock action is to return ENOSYS. POSIX
  locking kicks in if the filesystem is mounted with -o local_lock.
  Distributed locking is being worked on for the future.
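
For example, a mount that turns on all three options might look like
this (adjust the hostname and mount point to your installation)::

    mount -t pvfs2 -o acl,intr,local_lock tcp://localhost:3334/orangefs /pvfsmnt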


Debugging
=========

If you want the debug (GOSSIP) statements in a particular
source file (inode.c for example) to go to syslog::

    echo inode > /sys/kernel/debug/orangefs/kernel-debug

No debugging (the default)::

    echo none > /sys/kernel/debug/orangefs/kernel-debug

Debugging from several source files::

    echo inode,dir > /sys/kernel/debug/orangefs/kernel-debug

All debugging::

    echo all > /sys/kernel/debug/orangefs/kernel-debug

Get a list of all debugging keywords::

    cat /sys/kernel/debug/orangefs/debug-help


Protocol between Kernel Module and Userspace
============================================

Orangefs is a user space filesystem and an associated kernel module.
We'll just refer to the user space part of Orangefs as "userspace"
from here on out. Orangefs descends from PVFS, and userspace code
still uses PVFS for function and variable names. Userspace typedefs
many of the important structures. Function and variable names in
the kernel module have been transitioned to "orangefs", and the Linux
coding style avoids typedefs, so kernel module structures that
correspond to userspace structures are not typedefed.

The kernel module implements a pseudo device that userspace
can read from and write to. Userspace can also manipulate the
kernel module through the pseudo device with ioctl.

The Bufmap
----------

At startup userspace allocates two page-size-aligned (posix_memalign),
mlocked memory buffers: one is used for IO and one is used for readdir
operations. The IO buffer is 41943040 bytes and the readdir buffer is
4194304 bytes. Each buffer contains logical chunks, or partitions, and
a pointer to each buffer is added to its own PVFS_dev_map_desc structure
which also describes its total size, as well as the size and number of
the partitions.
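
The shape of that setup, reduced to its essentials, might look like the
sketch below. This is not the client-core's actual code; only the sizes,
posix_memalign() and mlock() come from the description above, and the
function name is made up for illustration::

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Allocate the two page-aligned, mlocked buffers described above. */
    int setup_shared_buffers(void **io_buf, void **readdir_buf)
    {
        const size_t io_size = 41943040;      /* IO buffer size      */
        const size_t readdir_size = 4194304;  /* readdir buffer size */
        long page = sysconf(_SC_PAGESIZE);

        if (posix_memalign(io_buf, page, io_size) ||
            posix_memalign(readdir_buf, page, readdir_size))
            return -1;

        /* Pin the buffers so the kernel module can map their pages. */
        if (mlock(*io_buf, io_size) || mlock(*readdir_buf, readdir_size))
            return -1;

        memset(*io_buf, 0, io_size);
        memset(*readdir_buf, 0, readdir_size);
        return 0;
    }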

A pointer to the IO buffer's PVFS_dev_map_desc structure is sent to a
mapping routine in the kernel module with an ioctl. The structure is
copied from user space to kernel space with copy_from_user and is used
to initialize the kernel module's "bufmap" (struct orangefs_bufmap), which
then contains:

* refcnt
  - a reference counter

* desc_size - PVFS2_BUFMAP_DEFAULT_DESC_SIZE (4194304) - the IO buffer's
  partition size, which represents the filesystem's block size and
  is used for s_blocksize in super blocks.

* desc_count - PVFS2_BUFMAP_DEFAULT_DESC_COUNT (10) - the number of
  partitions in the IO buffer.

* desc_shift - log2(desc_size), used for s_blocksize_bits in super blocks.

* total_size - the total size of the IO buffer.

* page_count - the number of 4096 byte pages in the IO buffer.

* page_array - a pointer to ``page_count * (sizeof(struct page*))`` bytes
  of kcalloced memory. This memory is used as an array of pointers
  to each of the pages in the IO buffer through a call to get_user_pages.

* desc_array - a pointer to ``desc_count * (sizeof(struct orangefs_bufmap_desc))``
  bytes of kcalloced memory. This memory is further initialized:

  user_desc is the kernel's copy of the IO buffer's ORANGEFS_dev_map_desc
  structure. user_desc->ptr points to the IO buffer.

  ::

    pages_per_desc = bufmap->desc_size / PAGE_SIZE
    offset = 0

    bufmap->desc_array[0].page_array = &bufmap->page_array[offset]
    bufmap->desc_array[0].array_count = pages_per_desc = 1024
    bufmap->desc_array[0].uaddr = (user_desc->ptr) + (0 * 1024 * 4096)
    offset += 1024
    .
    .
    .
    bufmap->desc_array[9].page_array = &bufmap->page_array[offset]
    bufmap->desc_array[9].array_count = pages_per_desc = 1024
    bufmap->desc_array[9].uaddr = (user_desc->ptr) +
                                  (9 * 1024 * 4096)
    offset += 1024

* buffer_index_array - a desc_count sized array of ints, used to
  indicate which of the IO buffer's partitions are available to use.

* buffer_index_lock - a spinlock to protect buffer_index_array during update.

* readdir_index_array - a five (ORANGEFS_READDIR_DEFAULT_DESC_COUNT) element
  int array used to indicate which of the readdir buffer's partitions are
  available to use.

* readdir_index_lock - a spinlock to protect readdir_index_array during
  update.
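
A hedged sketch of how a free IO-buffer partition might be handed out,
using only the buffer_index_array and buffer_index_lock described above
(the in-tree helpers are organized differently; the function name and
the 0/1 "free/busy" convention are made up for illustration)::

    /* Claim a free IO-buffer partition; returns its index, or -1 if all
     * partitions are busy.  The caller clears the entry when finished. */
    static int claim_buffer_index(struct orangefs_bufmap *bufmap)
    {
        int i, index = -1;

        spin_lock(&bufmap->buffer_index_lock);
        for (i = 0; i < bufmap->desc_count; i++) {
            if (bufmap->buffer_index_array[i] == 0) {  /* 0 == free */
                bufmap->buffer_index_array[i] = 1;     /* mark busy */
                index = i;
                break;
            }
        }
        spin_unlock(&bufmap->buffer_index_lock);

        return index;
    }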

Operations
----------

The kernel module builds an "op" (struct orangefs_kernel_op_s) when it
needs to communicate with userspace. Part of the op contains the "upcall"
which expresses the request to userspace. Part of the op eventually
contains the "downcall" which expresses the results of the request.

The slab allocator is used to keep a cache of op structures handy.

At init time the kernel module defines and initializes a request list
and an in_progress hash table to keep track of all the ops that are
in flight at any given time.

Ops are stateful:

* unknown
  - op was just initialized

* waiting
  - op is on request_list (upward bound)

* inprogr
  - op is in progress (waiting for downcall)

* serviced
  - op has matching downcall; ok

* purged
  - op has to start a timer since client-core
    exited uncleanly before servicing op

* given up
  - submitter has given up waiting for it

When some arbitrary userspace program needs to perform a
filesystem operation on Orangefs (readdir, I/O, create, whatever)
an op structure is initialized and tagged with a distinguishing ID
number. The upcall part of the op is filled out, and the op is
passed to the "service_operation" function.

Service_operation changes the op's state to "waiting", puts
it on the request list, and signals the Orangefs file_operations.poll
function through a wait queue. Userspace is polling the pseudo-device
and thus becomes aware of the upcall request that needs to be read.

When the Orangefs file_operations.read function is triggered, the
request list is searched for an op that seems ready-to-process.
The op is removed from the request list. The tag from the op and
the filled-out upcall struct are copy_to_user'ed back to userspace.

If any of these copy_to_user calls (and some additional protocol steps)
fail, the op's state is set to "waiting" and the op is added back to
the request list. Otherwise, the op's state is changed to "in progress",
and the op is hashed on its tag and put onto the end of a list in the
in_progress hash table at the index the tag hashed to.

When userspace has assembled the response to the upcall, it
writes the response, which includes the distinguishing tag, back to
the pseudo device in a series of io_vecs. This triggers the Orangefs
file_operations.write_iter function to find the op with the associated
tag and remove it from the in_progress hash table. As long as the op's
state is not "canceled" or "given up", its state is set to "serviced".
The file_operations.write_iter function returns to the waiting vfs,
and back to service_operation through wait_for_matching_downcall.

Service_operation returns to its caller with the op's downcall
part (the response to the upcall) filled out.
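
Seen from userspace, the round trip above boils down to polling the
pseudo device, reading an upcall, and writing back a downcall. The
sketch below shows only that shape; the buffer sizes, the layout of
what read() returns, and run_client_core() itself are assumptions for
illustration rather than the real client-core code (the actual response
layout is described under "Userspace uses writev()..." below)::

    #include <fcntl.h>
    #include <poll.h>
    #include <stdint.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int run_client_core(void)
    {
        int dev = open("/dev/pvfs2-req", O_RDWR);
        struct pollfd pfd = { .fd = dev, .events = POLLIN };
        char upcall[8192];      /* tag + upcall; size assumed        */
        char downcall[4096];    /* assembled response; size assumed  */
        int32_t proto_ver = 0;  /* real value: protocol headers      */
        int32_t magic = 0;      /* real value: protocol headers      */
        int64_t tag = 0;

        if (dev < 0)
            return -1;

        while (poll(&pfd, 1, -1) >= 0) {
            ssize_t n = read(dev, upcall, sizeof(upcall));

            if (n <= 0)
                continue;

            /* ... decode the tag and upcall, service the request,
             *     and fill in the downcall ... */

            struct iovec iov[4] = {
                { &proto_ver, sizeof(proto_ver) },
                { &magic,     sizeof(magic)     },
                { &tag,       sizeof(tag)       },
                { downcall,   sizeof(downcall)  },
            };
            writev(dev, iov, 4);    /* compare io_array[0..3] below */
        }

        close(dev);
        return 0;
    }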

The "client-core" is the bridge between the kernel module and
userspace. The client-core is a daemon. The client-core has an
associated watchdog daemon. If the client-core is ever signaled
to die, the watchdog daemon restarts the client-core. Even though
the client-core is restarted "right away", there is a period of
time during such an event that the client-core is dead. A dead client-core
can't be triggered by the Orangefs file_operations.poll function.
Ops that pass through service_operation during a "dead spell" can time out
on the wait queue and one attempt is made to recycle them. Obviously,
if the client-core stays dead too long, the arbitrary userspace processes
trying to use Orangefs will be negatively affected. Waiting ops
that can't be serviced will be removed from the request list and
have their states set to "given up". In-progress ops that can't
be serviced will be removed from the in_progress hash table and
have their states set to "given up".

Readdir and I/O ops are atypical with respect to their payloads.

- readdir ops use the smaller of the two pre-allocated pre-partitioned
  memory buffers. The readdir buffer is only available to userspace.
  The kernel module obtains an index to a free partition before launching
  a readdir op. Userspace deposits the results into the indexed partition
  and then writes them back to the pvfs device.

- io (read and write) ops use the larger of the two pre-allocated
  pre-partitioned memory buffers. The IO buffer is accessible from
  both userspace and the kernel module. The kernel module obtains an
  index to a free partition before launching an io op. The kernel module
  deposits write data into the indexed partition, to be consumed
  directly by userspace. Userspace deposits the results of read
  requests into the indexed partition, to be consumed directly
  by the kernel module.

Responses to kernel requests are all packaged in pvfs2_downcall_t
structs. Besides a few other members, pvfs2_downcall_t contains a
union of structs, each of which is associated with a particular
response type.

The several members outside of the union are:

``int32_t type``
  - type of operation.
``int32_t status``
  - return code for the operation.
``int64_t trailer_size``
  - 0 unless readdir operation.
``char *trailer_buf``
  - initialized to NULL, used during readdir operations.
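
As a hedged sketch (the union member names here are illustrative, not
the exact userspace typedefs), the layout described above amounts to
something like::

    typedef struct {
        int32_t type;           /* type of operation              */
        int32_t status;         /* return code for the operation  */
        int64_t trailer_size;   /* 0 unless readdir operation     */
        char *trailer_buf;      /* NULL unless readdir operation  */
        union {
            pvfs2_io_response_t io;
            pvfs2_statfs_response_t statfs;
            /* ... one response struct per operation type ... */
        } resp;
    } pvfs2_downcall_t;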

The appropriate member inside the union is filled out for any
particular response.

PVFS2_VFS_OP_FILE_IO
  fill a pvfs2_io_response_t

PVFS2_VFS_OP_LOOKUP
  fill a PVFS_object_kref

PVFS2_VFS_OP_CREATE
  fill a PVFS_object_kref

PVFS2_VFS_OP_SYMLINK
  fill a PVFS_object_kref

PVFS2_VFS_OP_GETATTR
  fill in a PVFS_sys_attr_s (tons of stuff the kernel doesn't need)
  fill in a string with the link target when the object is a symlink.

PVFS2_VFS_OP_MKDIR
  fill a PVFS_object_kref

PVFS2_VFS_OP_STATFS
  fill a pvfs2_statfs_response_t with useless info <g>. It is hard for
  us to know, in a timely fashion, these statistics about our
  distributed network filesystem.

PVFS2_VFS_OP_FS_MOUNT
  fill a pvfs2_fs_mount_response_t which is just like a PVFS_object_kref
  except its members are in a different order and "__pad1" is replaced
  with "id".

PVFS2_VFS_OP_GETXATTR
  fill a pvfs2_getxattr_response_t

PVFS2_VFS_OP_LISTXATTR
  fill a pvfs2_listxattr_response_t

PVFS2_VFS_OP_PARAM
  fill a pvfs2_param_response_t

PVFS2_VFS_OP_PERF_COUNT
  fill a pvfs2_perf_count_response_t

PVFS2_VFS_OP_FSKEY
  fill a pvfs2_fs_key_response_t

PVFS2_VFS_OP_READDIR
  jam everything needed to represent a pvfs2_readdir_response_t into
  the readdir buffer descriptor specified in the upcall.

Userspace uses writev() on /dev/pvfs2-req to pass responses to the requests
made by the kernel side.

A buffer_list containing:

- a pointer to the prepared response to the request from the
  kernel (struct pvfs2_downcall_t).
- and also, in the case of a readdir request, a pointer to a
  buffer containing descriptors for the objects in the target
  directory.

... is sent to the function (PINT_dev_write_list) which performs
the writev.

PINT_dev_write_list has a local iovec array: struct iovec io_array[10];

The first four elements of io_array are initialized like this for all
responses::

    io_array[0].iov_base = address of local variable "proto_ver" (int32_t)
    io_array[0].iov_len = sizeof(int32_t)

    io_array[1].iov_base = address of global variable "pdev_magic" (int32_t)
    io_array[1].iov_len = sizeof(int32_t)

    io_array[2].iov_base = address of parameter "tag" (PVFS_id_gen_t)
    io_array[2].iov_len = sizeof(int64_t)

    io_array[3].iov_base = address of out_downcall member (pvfs2_downcall_t)
                           of global variable vfs_request (vfs_request_t)
    io_array[3].iov_len = sizeof(pvfs2_downcall_t)

Readdir responses initialize the fifth element of io_array like this::

    io_array[4].iov_base = contents of member trailer_buf (char *)
                           from out_downcall member of global variable
                           vfs_request
    io_array[4].iov_len = contents of member trailer_size (PVFS_size)
                          from out_downcall member of global variable
                          vfs_request

Orangefs exploits the dcache in order to avoid sending redundant
requests to userspace. We keep object inode attributes up-to-date with
orangefs_inode_getattr. Orangefs_inode_getattr uses two arguments to
help it decide whether or not to update an inode: "new" and "bypass".
Orangefs keeps private data in an object's inode that includes a short
timeout value, getattr_time, which allows any iteration of
orangefs_inode_getattr to know how long it has been since the inode was
updated. When the object is not new (new == 0) and the bypass flag is not
set (bypass == 0) orangefs_inode_getattr returns without updating the inode
if getattr_time has not timed out. Getattr_time is updated each time the
inode is updated.
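
The heart of that decision, reduced to a sketch (the timeout arithmetic
and the attr_timeout value are placeholders, not the exact in-tree
code), looks roughly like::

    /* in orangefs_inode_getattr(), conceptually: */
    if (!new && !bypass &&
        time_before(jiffies, orangefs_inode->getattr_time))
        return 0;       /* cached attributes are still trusted */

    /* ...otherwise send a getattr upcall, refresh the inode, then: */
    orangefs_inode->getattr_time = jiffies + attr_timeout;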

Creation of a new object (file, dir, sym-link) includes the evaluation of
its pathname, resulting in a negative directory entry for the object.
A new inode is allocated and associated with the dentry, turning it from
a negative dentry into a "productive full member of society". Orangefs
obtains the new inode from Linux with new_inode() and associates
the inode with the dentry by sending the pair back to Linux with
d_instantiate().
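
In outline (error handling and attribute setup omitted, and the helper
name is invented for this sketch), that step looks like::

    /* Allocate an inode for a freshly created object and bind it to
     * the (until now negative) dentry. */
    static int orangefs_bind_new_inode(struct super_block *sb,
                                       struct dentry *dentry)
    {
        struct inode *inode = new_inode(sb);

        if (!inode)
            return -ENOMEM;

        /* ... fill in i_mode, i_op, i_fop, and the Orangefs handle ... */

        d_instantiate(dentry, inode);
        return 0;
    }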

The evaluation of a pathname for an object resolves to its corresponding
dentry. If there is no corresponding dentry, one is created for it in
the dcache. Whenever a dentry is modified or verified Orangefs stores a
short timeout value in the dentry's d_time, and the dentry will be trusted
for that amount of time. Orangefs is a network filesystem, and objects
can potentially change out-of-band with any particular Orangefs kernel module
instance, so trusting a dentry is risky. The alternative to trusting
dentries is to always obtain the needed information from userspace - at
least a trip to the client-core, maybe to the servers. Obtaining information
from a dentry is cheap, obtaining it from userspace is relatively expensive,
hence the motivation to use the dentry when possible.
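
A d_revalidate-style check built on d_time might look like this sketch
(the surrounding revalidation logic is omitted)::

    /* Trust the dentry only while its d_time stamp is still fresh. */
    if (time_before(jiffies, dentry->d_time))
        return 1;       /* still valid; no trip to userspace needed */

    /* otherwise verify the object via a new lookup/getattr upcall */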

The timeout values d_time and getattr_time are jiffy based, and the
code is designed to avoid the jiffy-wrap problem::

    "In general, if the clock may have wrapped around more than once, there
    is no way to tell how much time has elapsed. However, if the times t1
    and t2 are known to be fairly close, we can reliably compute the
    difference in a way that takes into account the possibility that the
    clock may have wrapped between times."

                   from course notes by instructor Andy Wang