Verify which cgroup version is configured in your environment by checking the type of the filesystem mounted at /sys/fs/cgroup during system boot:
stat -fc %T /sys/fs/cgroup/
For cgroup v1, the output is tmpfs. For cgroup v2, the output is cgroup2fs.
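The mapping above can be sketched as a small helper that turns the reported filesystem type into a version label (the function name is illustrative, not a YMatrix tool):

```shell
# Map the filesystem type reported by `stat -fc %T /sys/fs/cgroup/`
# to a cgroup version label.
cgroup_version() {
  case "$1" in
    tmpfs)     echo "cgroup v1" ;;
    cgroup2fs) echo "cgroup v2" ;;
    *)         echo "unknown" ;;
  esac
}

# Usage: cgroup_version "$(stat -fc %T /sys/fs/cgroup/)"
```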
Note!
Only cgroup v1 is supported in the current version. To use cgroup v2, upgrade to YMatrix 6.4.x or later.
If your system currently uses cgroup v2, switch to cgroup v1 by running the following commands as root:
Red Hat 8 / Rocky 8 / Oracle 8:
grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller"
Ubuntu:
vim /etc/default/grub
# add or modify: GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
update-grub
After updating the boot configuration with either method, reboot the system for the change to take effect.
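After rebooting, you can confirm that the kernel actually booted with the legacy (v1) hierarchy by inspecting the kernel command line. A minimal sketch (the function name is an assumption, not a YMatrix tool):

```shell
# uses_legacy_hierarchy <kernel_cmdline>
# Echoes "yes" if the cgroup v1 switch is present in the given
# kernel command line, "no" otherwise.
uses_legacy_hierarchy() {
  case " $1 " in
    *" systemd.unified_cgroup_hierarchy=0 "*) echo yes ;;
    *)                                        echo no ;;
  esac
}

# After reboot: uses_legacy_hierarchy "$(cat /proc/cmdline)"
```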
Note!
You must be a superuser or have sudo privileges to edit this file.
vi /etc/cgconfig.conf
Add the following content to the configuration file:
group gpdb {
    perm {
        task {
            uid = mxadmin;
            gid = mxadmin;
        }
        admin {
            uid = mxadmin;
            gid = mxadmin;
        }
    }
    cpu {
    }
    cpuacct {
    }
    cpuset {
    }
    memory {
    }
}
This configuration sets up CPU, CPU accounting, CPU core set, and memory control groups managed by the mxadmin user.
Parse the configuration and start the cgroup service on every node in the YMatrix cluster:
cgconfigparser -l /etc/cgconfig.conf
systemctl enable cgconfig.service
systemctl start cgconfig.service
Check the cgroup mount points on the node:
grep cgroup /proc/mounts
The expected output shows the cgroup directory mounted at /sys/fs/cgroup:
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
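As a sketch, the mount listing can be checked programmatically for the controllers used in this guide (the function name and required-controller list are assumptions based on the configuration above):

```shell
# controllers_present <mounts_text>
# Prints a line for each required cgroup v1 controller NOT found in
# the given /proc/mounts-style listing; prints nothing if all of
# them are mounted.
controllers_present() {
  for ctrl in cpu cpuacct cpuset memory; do
    printf '%s\n' "$1" | grep -Eq "^cgroup .*${ctrl}" || echo "missing: $ctrl"
  done
}

# Usage: controllers_present "$(grep cgroup /proc/mounts)"
```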
Verify that the gpdb control group directories were created under each controller:
ls -l <cgroup_mount_point>/cpu/gpdb
ls -l <cgroup_mount_point>/cpuset/gpdb
ls -l <cgroup_mount_point>/memory/gpdb
If these directories exist and are owned by mxadmin:mxadmin, then cgroup has been successfully configured for YMatrix database resource management.
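The directory checks above can be combined into one helper. A minimal sketch, assuming GNU `stat` and the mxadmin owner used earlier (the function name is illustrative):

```shell
# check_gpdb_dirs <cgroup_mount_point> <expected_owner>
# Echoes "ok" if every controller's gpdb directory exists and is
# owned by the expected user, "fail" otherwise (details on stderr).
check_gpdb_dirs() {
  mount_point=$1; owner=$2; status=ok
  for ctrl in cpu cpuset memory; do
    dir="$mount_point/$ctrl/gpdb"
    if [ ! -d "$dir" ]; then
      echo "missing: $dir" >&2; status=fail
    elif [ "$(stat -c %U "$dir")" != "$owner" ]; then
      echo "wrong owner on $dir: $(stat -c %U "$dir")" >&2; status=fail
    fi
  done
  echo "$status"
}

# Usage: check_gpdb_dirs /sys/fs/cgroup mxadmin
```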