Description of problem:
Errors related to the user_xattr flag are being logged, and I would like to know why these errors occur.
In a Rocky Linux 8.10 environment, the following tests all showed the same behavior (a rough sketch of the second test follows the list):
- Changing the xfsprogs version and recreating the brick file system (xfsprogs 5.0.0, 4.5.0, 5.13.0).
- Restarting the processes after changing the key GlusterFS volume options (features.acl: off, features.selinux: off, nfs.acl: disabled).
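For reference, a minimal sketch of the second test, assuming the volume name vol01 from the volume info below and the option values quoted above (illustration only, not the exact commands that were run):
```
# Apply the volume options mentioned above (values as quoted in the description)
gluster volume set vol01 features.acl off
gluster volume set vol01 features.selinux off
gluster volume set vol01 nfs.acl disabled
# One possible way to restart the brick processes so the options take effect
gluster volume stop vol01
gluster volume start vol01
```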
The exact command to reproduce the issue:
Create the volume with the normal GlusterFS default configuration; do not specify any other options.
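A minimal sketch of that default setup, assuming the brick paths and peer addresses shown in the volume info below (illustrative only, no options beyond the defaults):
```
# Default brick file system and volume creation; the IPs and paths are taken
# from the `gluster volume info` output below and are only assumptions
mkfs.xfs /dev/sda3
mount /dev/sda3 /appdata
mkdir -p /appdata/brick
gluster peer probe 192.168.21.182
gluster volume create vol01 replica 2 \
    192.168.21.181:/appdata/brick 192.168.21.182:/appdata/brick
gluster volume start vol01
```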
The full output of the command that failed:
```
[2024-12-31 06:03:45.075027 +0000] W [posix-inode-fd-ops.c:3881:posix_getxattr] 0-vol01-posix: Extended attributes not supported (try remounting brick with 'user_xattr' flag)
[2024-12-31 06:03:45.075076 +0000] E [MSGID: 113001] [posix-inode-fd-ops.c:3892:posix_getxattr] 0-vol01-posix: getxattr failed on /appdata/brick/ (path: /): system.nfs4_acl [Operation not supported]
```
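The warning points at a missing user_xattr mount flag, but the second line shows the actual failure: a getxattr of system.nfs4_acl on the brick root returning "Operation not supported". One way to check whether the brick file system supports extended attributes at all, as opposed to just lacking that specific attribute, is the following sketch (brick path assumed from the volume info below):
```
# user.* xattrs normally work on XFS without any special mount option
setfattr -n user.test -v hello /appdata/brick
getfattr -n user.test /appdata/brick     # expected: user.test="hello"
setfattr -x user.test /appdata/brick

# The attribute requested in the log; on a local XFS brick this is likely to
# fail with "Operation not supported", matching the error above
getfattr -n system.nfs4_acl /appdata/brick
```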
Expected results:
No error logging.
Mandatory info:
**- The output of the `gluster volume info` command**:
Volume Name: vol01
Type: Replicate
Volume ID: f2245f73-dd3b-4d1f-9d49-bf2041bd6baf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.21.181:/appdata/brick
Brick2: 192.168.21.182:/appdata/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
storage.health-check-interval: 0
**- The output of the `gluster volume status` command**:
Status of volume: vol01
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.21.181:/appdata/brick 49152 0 Y 12669
Brick 192.168.21.182:/appdata/brick 49153 0 Y 15483
Self-heal Daemon on localhost N/A N/A Y 12684
Self-heal Daemon on 192.168.21.182 N/A N/A Y 15498
Task Status of Volume vol01
------------------------------------------------------------------------------
There are no active volume tasks
**- The output of the `gluster volume heal` command**:
# gluster volume heal vol01 info
Brick 192.168.21.181:/appdata/brick
Status: Connected
Number of entries: 0
Brick 192.168.21.182:/appdata/brick
Status: Connected
Number of entries: 0
**- Provide logs present on following locations of client and server nodes** - /var/log/glusterfs/
**- Is there any crash? Provide the backtrace and coredump**: No.
Additional info:
Brick file system mount options:
/dev/sda3 on /appdata type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota)
The default mount options are in use; when the user_xattr or noacl mount options are added, they are reported as invalid parameters and the file system fails to mount.
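That behavior is expected: user_xattr and noacl are ext2/3/4 mount options, and XFS enables extended attributes unconditionally and does not accept them, so the warning's suggestion to remount with user_xattr does not apply to an XFS brick. A quick sanity check, assuming the mount point above:
```
# XFS rejects the ext*-style options, consistent with the behavior described above
mount -o remount,user_xattr /appdata     # expected to be refused

# POSIX ACLs (stored as system.posix_acl_* xattrs) should still work on the brick
setfacl -m u:nobody:r /appdata/brick
getfacl /appdata/brick
setfacl -x u:nobody /appdata/brick
```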
Brick Filesystem xfs_info result
**- The operating system / glusterfs version**:
Rocky Linux 8.10 / GlusterFS 10.3
Note: Please hide any confidential data which you don't want to share in public like IP address, file name, hostname or any other configuration