Linux System Admin FAQs [Updated 2024]

A sample set of Linux administrator questions and answers is posted here. This is a mixed set of commonly asked administrator interview questions (specific to Red Hat/CentOS) for freshers and experienced staff, and it includes some advanced Linux troubleshooting steps as well.


=======User Management Related=======

(1) Which file stores the minimum UID, maximum UID, password expiration settings, password encryption method being used, etc.?

ANS : /etc/login.defs

(2) How can a file be copied automatically into every new user's home directory upon account creation?

ANS :  Store the file in '/etc/skel' directory.
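The mechanism can be demonstrated without root by simulating the copy that useradd performs (the skel and home paths below are temporary stand-ins, not the real system paths):

```shell
# Simulate what 'useradd -m' does with /etc/skel (temp dirs, no root needed)
skel=$(mktemp -d)   # stand-in for /etc/skel
home=$(mktemp -d)   # stand-in for the new user's home directory
echo 'alias ll="ls -l"' > "$skel/.custom_rc"   # file every new user should receive
cp -a "$skel/." "$home/"                       # useradd copies skel contents like this
ls -A "$home"
```

Any file dropped into the real /etc/skel gets copied the same way the next time a user is created with a home directory.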

(3) What are the different fields in '/etc/passwd' file?

ANS :  UserName | Password | UserID | GroupID | Comments | HomeDir | LoginShell

< couple of lines from /etc/passwd file are pasted below for reference >

redhat:x:500:500:Redhat User:/home/redhat:/bin/bash

mssm:x:501:501:another user:/home/mssm:/bin/bash

- "x" in the password column indicates that the encrypted password is stored in '/etc/shadow' file.

(4) How to lock a user account?

ANS : This can be done by using either "usermod -L <UserName>" or "passwd -l <UserName>" commands.


# usermod -L mango

Once an account is locked, an exclamation mark is prefixed to the encrypted password field of its entry in the "/etc/shadow" file.


To unlock an account:

# usermod -U mango

(5) How to disable user login via terminals?

ANS : Change the login shell from "/bin/bash" to "/sbin/nologin" in the user's entry in the "/etc/passwd" file (or run "usermod -s /sbin/nologin <UserName>").

(6) Which commands are normally recommended to edit "/etc/passwd", "/etc/shadow", "/etc/group" and "/etc/gshadow" files?

ANS : vipw → To edit the user password file (/etc/passwd)
vigr → To edit the user group file(/etc/group)
vipw -s → To edit shadow password file (/etc/shadow)
vigr -s → To edit shadow group file (/etc/gshadow)

These commands lock the respective file while editing, to avoid corruption. NOTE: It is not a recommended practice to edit the shadow files manually.

(7) Whenever a user tries to log in via a terminal, the system throws the error "This account is currently not available"; via the GUI, after the user enters the password, the login appears to proceed but returns to the login prompt. How could this issue be fixed?

ANS : This happens because the shell field is set to "/sbin/nologin" in the "/etc/passwd" file; change it back to "/bin/bash" and the user should be allowed to log in.

If the shell field is set to "/bin/false", a login attempt produces no error or message at all; it simply returns to the login prompt, and the same happens in GUI mode.

(8) How do you force a new user to reset the password upon first login?

{ The prompt should come up like below }

[redhat@localhost ~]$ su - mango
You are required to change your password immediately (root enforced)
Changing password for mango.
(current) UNIX password:

ANS : Use 'chage' command and set the expiration date as given below

[root@localhost skel]# chage -d 0 mango

<< To view password aging details >>

[root@localhost skel]# chage -l mango
Last password change : password must be changed
Password expires : password must be changed
Password inactive : password must be changed
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

(9) How to change the default home directory location from '/home' to '/home1' (so that any new users get their home directory created under /home1)?

ANS : - Edit the file ‘/etc/default/useradd’.

- Change the 'HOME' line to the required directory, i.e. HOME=/home1.

- Save the changes and exit. After this, any new user's home directory would be created under ‘/home1’ instead of the default '/home'.

- One could check useradd defaults using the command :
# useradd -D
# cat /etc/default/useradd

- Also, we could create a user with a home directory other than the defined default by using the "-d" parameter while creating the user, as shown below (the user name is "alldoctors" and the home directory is /root/doctors):

[root@Redhat5Lvm ~]# useradd -d /root/doctors alldoctors
[root@Redhat5Lvm ~]# grep alldoctors /etc/passwd

[root@Redhat5Lvm ~]# ls -ali /root/doctors/
total 28
607780 drwx------ 3 alldoctors alldoctors 4096 Aug 3 00:31 .
543457 drwxr-x--- 18 root root 4096 Aug 3 00:31 ..
608253 -rw-r--r-- 1 alldoctors alldoctors 33 Aug 3 00:31 .bash_logout
608252 -rw-r--r-- 1 alldoctors alldoctors 176 Aug 3 00:31 .bash_profile
608251 -rw-r--r-- 1 alldoctors alldoctors 124 Aug 3 00:31 .bashrc
608248 drwxr-xr-x 4 alldoctors alldoctors 4096 Aug 3 00:31 .mozilla

(10) How to grant complete access (rwx) to the owner on files/directories created by a user, while denying any level of access to others including the group?

ANS : - Define the umask value for the required user. This can be done by editing the ‘.bash_profile’ file under the user's home directory.

For example, if it is required to define this for the user "mmurthy" then need to edit the file "/home/mmurthy/.bash_profile" and define umask as given below (assuming that the default home directory location is not changed):
umask 0077
- Save and exit the file.
- If the user is already logged in, the user has to log out and log back in. From then on, any files or directories created by this user grant full privileges to the owner only; the group and others get no privileges.

- For the root user, the umask is defined in the "/etc/init.d/functions" file; system-wide defaults live in ‘/etc/profile’ (login shells) and ‘/etc/bashrc’ (non-login shells).
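The effect of a 0077 umask can be verified in any shell session (run here in a throwaway temporary directory):

```shell
# With umask 0077, new files get mode 600 and new directories mode 700
cd "$(mktemp -d)"
umask 0077
touch secret.txt
mkdir private
stat -c '%a %n' secret.txt private   # prints: 600 secret.txt / 700 private
```

The default file mode 666 and directory mode 777 are masked by 077, leaving owner-only permissions.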

(11) How to check if a user account is locked?

ANS : Run the command "passwd -S <UserName>"; this shows whether the password is locked. Alternatively, grep for the username in the '/etc/shadow' file and look for a "!" mark prefixed to the encrypted password field.

[root@server6 ~]# passwd -S smurthy
smurthy LK 1970-01-01 0 99999 7 -1 (Password locked.)

[root@server6 ~]# grep smurthy /etc/shadow

A double exclamation mark ("!!") here indicates that the account was locked by running the "passwd -l <UserName>" command (available only to the root user), whereas a single exclamation mark indicates that the account was locked with the "usermod -L <UserName>" command. Accounts locked with the 'usermod' command record such events in the '/var/log/secure' file by default.

To "unlock" a user account, run the "passwd -u <UserName>" command, or run "usermod -U <UserName>".

[root@server6 ~]# grep smurthy /etc/shadow

[root@server6 ~]# passwd -S smurthy
smurthy LK 1970-01-01 0 99999 7 -1 (Password locked.)

[root@server6 ~]# passwd -u smurthy
Unlocking password for user smurthy.
passwd: Success

[root@server6 ~]# grep smurthy /etc/shadow

[root@server6 ~]# passwd -S smurthy
smurthy PS 1970-01-01 0 99999 7 -1 (Password set, SHA512 crypt.)

(12) How to find out the shadow password encryption method (hashing algorithm) being used in Linux? How could this be changed (example : from MD5 to SHA512)?

ANS:-  One can find out the password encryption method being used for shadow passwords as shown below:

- Check in the file ' /etc/login.defs':

[root@server8 ~]# grep -i crypt /etc/login.defs    
# Use SHA512 to encrypt password.
- Check using "authconfig" command:
[root@server8 ~]# authconfig --test|grep hashing
password hashing algorithm is md5
Note: The authconfig package has been deprecated from RHEL8 onwards; authselect is available instead.
- Check the beginning characters of the second field (the encrypted password) in the '/etc/shadow' file:
$6 → indicates sha512
$5 → indicates sha256
$1 → indicates md5
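The prefix check can be scripted; the hash string below is a made-up sample, not a real shadow entry:

```shell
# Classify a shadow-style hash by its "$id$" prefix (sample value, not a real hash)
hash='$6$somesalt$somehashedvalue'
case "$hash" in
  '$6$'*) echo "sha512"  ;;
  '$5$'*) echo "sha256"  ;;
  '$1$'*) echo "md5"     ;;
  *)      echo "unknown" ;;
esac
```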

To Change Password Encryption Method to SHA512:

# authconfig --passalgo=sha512 --update {this would change the password encryption method to SHA512}

Verify if it got changed successfully:
[root@server8 ~]# grep -i crypt /etc/login.defs
# Use SHA512 to encrypt password.

(13) What are the possible causes when a user fails to log in to a Linux system (physical/remote console) despite providing proper credentials?

ANS : Here are the possible reasons why a user fails to log in at the console:

→ Account Locked
When the user tries to log in via the GUI, an "authentication failure" error is shown after entering the password, and it goes back to the login prompt.

In CLI mode, after entering the password, the login fails with an "incorrect password" error. However, if a user runs "su" from the root account, it works.

→ Account Expired
When the account has expired, an error notifying about it is shown.

→ Shell Disabled
After entering the password in the GUI, the system shows progress but returns to the login prompt. When this user attempts to log in via the CLI, the error "This account is currently not available" is shown.

→ Only Non-root Users Failed To Login
If all non-root users are unable to log in via GUI/CLI but the 'root' user can, this could be because the file "/etc/nologin" is present on the system.

→ Only Non-root Users Failed To Login in CLI
If all non-root users are unable to log in via the CLI but can log in via the GUI, it could be due to "/tmp" space limitations. Check whether "/tmp" is configured and mounted separately, and check for free space under "/tmp".

→ User login fails from GUI or text console, however 'su' works
If a user fails to log in from the GUI/console but can be logged into from other user accounts by running 'su', it could be due to PAM (Pluggable Authentication Modules) restrictions.

This is how it can be replicated: one could use the "pam_access" module to restrict login. Add the below line to the files '/etc/pam.d/login' & '/etc/pam.d/gdm-*':

account      required      pam_access.so

After this, add " - : <UserName> : ALL " to the ‘/etc/security/access.conf’ file. For example, to restrict the user "test", we could add the below line to access.conf:

- : test : ALL

Once this is done, whenever this user (test) tries to log in, a "permission denied" error is shown.

→ Only root user login failed from console, however, works in GUI
This could be because no terminals are available or defined in the ‘/etc/securetty’ file.

<><> If a user is unable to login remotely via SSH then the reasons could be different. Here are the reasons in such cases:

→ User Restricted
If the "AllowUsers" parameter is configured in the ‘/etc/ssh/sshd_config’ file, the required user must be added to this list to get access.

→ Max Logins Set
If the "maxlogins" parameter is set in ‘/etc/security/limits.conf’, the user is allowed up to that many concurrent logins and further connections are denied. "maxsyslogins" may be configured as well, to limit concurrent access to the system as a whole.

(14) How to manually add a user without using "useradd/adduser" or "system-config-user" utilities?

ANS : First step is to create the required directory under '/home' (default home directory for all local users) and set proper permissions.

# mkdir /home/user1

[root@host1 mail]# ls -ld /home/user1
drwxr-xr-x. 2 root root 4096 Jan 24 07:19 /home/user1

At this stage, since the user "user1" is not yet defined, we could not change the home directory to be owned by "user1". This would be done later.

Now, edit the file ‘/etc/passwd’ manually and set the required parameters for the new user "user1":

# vipw (this command would block multiple edits of /etc/passwd file)

user1:x:2000:2000:local user:/home/user1:/bin/bash

{ assign an un-used UID & GID }

[root@host1 ~]# grep user1 /etc/passwd
user1:x:2000:2000:local user:/home/user1:/bin/bash

The next step is to create the required group by editing the ‘/etc/group’ file using the 'vigr' command:

# vigr

Once the group is created, next thing is to create local profile files for the new user by copying from ‘/etc/skel’ folder:

[root@host1 ~]# cp -arv /etc/skel/. /home/user1
`/etc/skel/./.bash_profile' -> `/home/user1/./.bash_profile'
`/etc/skel/./.bash_logout' -> `/home/user1/./.bash_logout'
`/etc/skel/./.mozilla' -> `/home/user1/./.mozilla'
`/etc/skel/./.mozilla/extensions' -> `/home/user1/./.mozilla/extensions'
`/etc/skel/./.mozilla/plugins' -> `/home/user1/./.mozilla/plugins'
`/etc/skel/./.gnome2' -> `/home/user1/./.gnome2'
`/etc/skel/./.bashrc' -> `/home/user1/./.bashrc'

Now, change ownership of all the files under '/home/user1' to the new user as shown below:
# chown -R user1:user1 /home/user1
# chmod 0700 /home/user1

- Set a password for this user.
# echo "user1" | passwd --stdin user1

- Try logging in as the new user (user1) and test. You may also run 'umask' after login to verify if the umask value is set properly (by default umask value for non-root users is 0002).

For the user's mail requirement, create a proper file under the ‘/var/spool/mail’ directory (the default mailbox location) with the username and correct permissions:

# cd /var/spool/mail
# touch user1
# chown user1:mail user1
# chmod 660 user1

=======Shell Scripting Related=======

(1) How to create a simple shell script file in Linux?

ANS : Make sure that the file begins with the "#!/bin/bash" line. Make it executable by running the command "chmod +x <filename>". The script can then be executed by running "sh <filename>" or "./<filename>".
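A minimal end-to-end example (the script name hello.sh and the temporary directory are illustrative):

```shell
# Create a script, make it executable, and run it
tmp=$(mktemp -d)
cat > "$tmp/hello.sh" <<'EOF'
#!/bin/bash
echo "Hello from a shell script"
EOF
chmod +x "$tmp/hello.sh"
"$tmp/hello.sh"
```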

(2) Which command could be used to check the default shell being used?

ANS : # echo $SHELL
# echo $0

Like-wise :

# echo $?
   → this shows the exit status of the most recently executed command.

# echo $$
  → this shows the current shell PID (when run inside a script, this prints the PID of the shell executing the script).

# echo $@ OR # echo $*
   → these print the arguments passed to the script.

# echo $#
   → this shows the total number of arguments passed.

# echo $!
   → this reports the PID of the most recent background process.

To check this, let's create a small script with contents as shown below:

#!/bin/bash
echo -e "Print Current shell ID (\$$): $$"
echo -e "Arguments passed (\$@): $@"
echo -e "No of arguments passed (\$#): $#"
echo -e "This also prints arguments passed (\$*): $*"

Run the script file as shown below by passing arguments as '1 2 3' (whatever you may pass):
( NOTE: "$@" expands each argument as a separate word, whereas "$*" joins all arguments into a single word; the difference is visible only when they are quoted)

[root@ansible-host tmp]# ./ 1 2 3
Print Current shell ID ($$): 107199
Arguments passed ($@): 1 2 3
No of arguments passed ($#): 3
This also prints arguments passed ($*): 1 2 3
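The quoting difference between $@ and $* can be shown directly in the shell:

```shell
# When quoted, "$@" preserves each argument; "$*" joins them into one word
set -- "a b" c                 # two positional parameters: 'a b' and 'c'
printf '<%s>\n' "$@"           # prints: <a b> then <c>  (two words)
printf '<%s>\n' "$*"           # prints: <a b c>         (one word)
```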

=======Log File Related=======

(1) Where are the log files stored usually in Linux (default log file location)?

ANS : Usually under the ‘/var/log’ directory.

(2) How to check if the syslog service is running?

ANS : By running the command ‘/etc/init.d/rsyslog status’ or ‘service rsyslog status’, or by using the "systemctl status rsyslog.service" command (in RHEL7 and above).

(3) How to change the default log rotation frequency from weekly to something else?

ANS : Edit the file '/etc/logrotate.conf' and replace the default 'weekly' directive with the required frequency (updating the comment accordingly), for example:

# rotate log files monthly
monthly

Save changes, and to apply the changes immediately (forcing) then run the below command :
# logrotate -f /etc/logrotate.conf

(4) How to check the boot messages (kernel ring buffer)?

ANS : Using the "dmesg" command or by viewing contents of the file ‘/var/log/dmesg’.

(5) How to increase the size of 'kernel ring buffer' (dmesg)?

ANS : By default, the kernel ring buffer size is fixed at kernel build time via the CONFIG_LOG_BUF_SHIFT option (the buffer is 2^CONFIG_LOG_BUF_SHIFT bytes). To increase this space, add "log_buf_len=4M" to the kernel stanza in the 'grub.conf' or 'grub.cfg' files as required.

(6) What are the purpose of the files ‘/var/log/wtmp’ & ‘/var/log/btmp’ and what do they store?

ANS : These files store user login/logout details accumulated since the files were created.

User logins, logouts, terminal types, etc., are stored in ‘/var/log/wtmp’. This is a binary file that is not directly human-readable, so the "last" command is used to read data from it (or from the file designated by the -f flag).

All unsuccessful (bad) login attempts are recorded in ‘/var/log/btmp’, which can be displayed using the "lastb" command. All these login/logout events also get recorded in the ‘/var/log/secure’ file (this file usually stores all authentication/authorization events).

Likewise, there is ‘/var/log/lastlog’, which records each user's most recent successful login. In earlier RHEL versions (RHEL 5.x) there used to be a ‘/var/log/faillog’ file holding failed login events, which became obsolete in RHEL 6.1 and is no longer available.

=======Package/Yum Related=======

(1) What does 'ivh' represent in the command "rpm -ivh <PackageName>"?

ANS : i - install
v - verbose mode
h - hash mode, which prints hash (#) characters as the installation progresses

(2) What is the difference between “rpm -F <PackageName>” and “rpm -U <PackageName>”?

ANS : The command "rpm -F" freshens a package: it upgrades an existing package, but does not install it if an earlier version is not found. The command "rpm -U" upgrades an existing package if present, otherwise installs it.

(3) How to find to which package does the "ls" command belongs to (to find out package responsible for this command)?

ANS : First find out the absolute path of this binary or command. Since we know that 'ls' is a binary command, we could simply run 'which ls' to find the path of the binary file. This shows that '/bin/ls' is the absolute path of the command.

Now, run the command “rpm -qf /bin/ls” {this tells which package this command (binary file) belongs to, if it was installed by a package}.

(4) How to find out the configuration files installed by a package (take into consideration of the "coreutils" package)?

ANS : # rpm -qc coreutils

To list out only the document files installed by coreutils package:
# rpm -qd coreutils

To list out all the files installed by this package:
# rpm -ql coreutils

# rpm -q --filesbypkg coreutils

To list out dependencies :
# rpm -qR coreutils

To list out packages which require this package:
# rpm -q --whatrequires coreutils

To find out more information of this package:
# rpm -qi coreutils

To find out any scripts executed by this package:
# rpm -q --scripts coreutils

Similarly, to find details of package which is not yet installed:

List Files In Package:
# rpm -qpl <PathOfPackageNotYetInstalled>

{The list would show up files which would get added to system after installing package}

List Only Config Files:
# rpm -qpc <PathOfPackageNotYetInstalled>

List Only Document Files:
# rpm -qpd <PathOfPackageNotYetInstalled>

List Out Dependencies For This Package:
# rpm -qpR <PathOfPackageNotYetInstalled>

List Details For This Package:
# rpm -qpi <PathOfPackageNotYetInstalled>

(5) How to find out the list of all packages installed on a RHEL system (server)?

ANS : By checking the file ‘/root/anaconda-ks.cfg’. This shows the details of the system's deployment, including packages and package groups. Packages installed later would not be listed here.

Otherwise, run the command “rpm -qa”; this queries the RPM database and prints the names of all packages installed so far on the system.

Note: In RHEL6, the rpm-cron package needs to be installed; it creates the file '/var/log/rpmpkgs', which logs package activities.

(6) How to create a local yum repository which would make use of the mounted Linux ISO image under '/media' mount point?

ANS : Create a file ending with .repo extension under '/etc/yum.repos.d' directory with proper syntax as shown below:

[root@localhost yum.repos.d]# cat local.repo
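The contents of the repo file were not captured above; a typical minimal layout looks like the following (the repo id, name and baseurl are illustrative, assuming the ISO is mounted at /media — on some releases the baseurl must point at a sub-directory such as Server that contains the repodata):

```shell
# Write an illustrative local.repo (values shown are example assumptions)
cat > /tmp/local.repo <<'EOF'
[local-media]
name=Local ISO Repository
baseurl=file:///media
enabled=1
gpgcheck=0
EOF
cat /tmp/local.repo
```

In practice the file is placed under /etc/yum.repos.d/, after which "yum repolist" should show the new repository.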

(7) What are the different ways that can be used to verify that a package got installed successfully by yum/dnf command?

ANS : Method 1: Check exit status of yum command

Immediately after running the ‘yum/dnf’ command, check the exit status; if it shows "0" (zero), the command executed successfully.
[root@localhost yum.repos.d]# echo $?
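The convention can be sketched in any shell ('|| ...' is used so the failing status can be captured without aborting the script):

```shell
# Exit status convention: 0 = success, non-zero = failure
true
status_ok=$?                # status of 'true' (always 0)
false || status_fail=$?     # capture the failing status of 'false' safely
echo "true exited with: $status_ok"
echo "false exited with: $status_fail"
```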

Method 2 : Query the package installed

Run rpm -qa and test.

[root@localhost yum.repos.d]# rpm -qa | grep certmonger

Method 3 : Verify package integrity

Verify using the 'rpm' command as shown below (no output means the package verified successfully):

[root@localhost yum.repos.d]# rpm -V certmonger

Method 4 : Verify in 'yum/dnf' log file

Check the '/var/log/yum.log' file to see the successful log entry about the same package.

[root@localhost yum.repos.d]# grep certmonger /var/log/yum.log
Jul 15 10:33:22 Installed: certmonger-0.61-3.el6.x86_64

Note: The log files of DNF are stored separately for hawkey, librepo and dnf. These files can be found under the /var/log/ directory.

(8) How to view the installed date of a package (consider the package sg3_utils)?

ANS : Check in '/var/log/yum.log' or '/var/log/dnf.log' file (provided the package is installed by yum/dnf):

[root@server8 ~]# grep sg3_utils /var/log/yum.log
Oct 22 12:11:38 Installed: sg3_utils-1.28-4.el6.x86_64

Use the command "rpm -q <PackageName> --last"

[root@server8 ~]# rpm -q sg3_utils --last
sg3_utils-1.28-4.el6.x86_64                Wed 22 Oct 2014 12:11:37 PM PDT

Using the command: rpm -qi <PackageName> | grep "Install Date"

[root@server8 ~]# rpm -qi sg3_utils|grep "Install Date"
Install Date: Wed 22 Oct 2014 12:11:37 PM PDT   Build Host:

(9) If for some reason, a binary file gets corrupted or missing from system, then how could this be recovered with minimal downtime?

ANS : As a first attempt, one could copy the missing binary (executable) file from a similar working system using the scp (secure copy) command.

If that is not possible or doesn't work, the binary file can be extracted from the respective package and moved to the desired path.

Consider the situation wherein the binary command file ‘/sbin/ifconfig’ is missing or corrupted, hence, unable to run this command. So, let's find out how to fix this.

Steps :

- Identify which package this command belongs to.

- On a working system, run the command 'rpm -qf /sbin/ifconfig'. This shows which package installed this executable file. The output confirms that this binary belongs to the "net-tools" package:

[root@server]# rpm -qf /sbin/ifconfig

- Mount an ISO image file which holds this package, or download the required package using the "yumdownloader" command (yumdownloader belongs to the yum-utils package).

- Check if the required file is available in the package before extracting it, using the "rpm2cpio" command as shown below (in this example the net-tools binary rpm was downloaded into the root directory):

[root@server]# rpm2cpio /net-tools-1.60-110.el6_2.x86_64.rpm | cpio --extract --list --verbose "*ifconfig"
-rwxr-xr-x   1 root root        69440 Apr 26 2012 ./sbin/ifconfig
1542 blocks

- The above message confirms that the binary file (/sbin/ifconfig) is available in the package, hence, it needs to be extracted.

- Create a directory where to extract the binary and then run the below command :

[root@server]# rpm2cpio /net-tools-1.60-110.el6_2.x86_64.rpm | cpio --extract --make-directories --verbose "*ifconfig"
1542 blocks

- The binary would be found under 'sbin' directory in current directory.

[root@server]# tree
└── sbin
   └── ifconfig

1 directory, 1 file

- Finally, move this binary file to the '/sbin' directory and make sure proper permissions are set as required. Once this is done, the 'ifconfig' command works as before.

=======File System/LVM Related=======

(1) How to check what file systems are mounted and their read/write status?

ANS : One could simply verify the status of mounted file system by running either of the below commands:

# cat /proc/mounts
# mount
# df -Th { this would show the file system usage but not read/write status }

(2) Where is 'grub.conf' or 'grub.cfg' file stored in RHEL systems?

ANS : In ‘/boot/grub’ OR ‘/boot/grub2’ (in case of RHEL7 and above) directory.

(3) How do you remount a file system read only on the fly?

ANS : By using the ‘mount’ command as shown below:

# mount -o remount,ro <Mountpoint>

- In case a file system is required to come up with “read-only” status at boot, edit the ‘/etc/fstab’ file appropriately.

Note: Mounting a file system as read-only makes writing data impossible, so take necessary downtime if required before doing so.

(4) What command could be used to convert an EXT2 file system into EXT3?

ANS : An ‘EXT2’ file system can be converted into ‘EXT3’ using the “tune2fs” command (syntax: tune2fs -j <device or file system name>). However, Red Hat doesn’t recommend converting ‘EXT3’ to ‘EXT4’; the only way is to back up the EXT3 file system and restore it on a newly created EXT4 block device.

The “-j” flag adds the journal feature to the respective file system.

(5) How to run file system check on a logical volume in rescue mode for EXT based file systems?

ANS : Boot up the system using an ISO boot image file or a disc. At the boot prompt type “linux rescue nomount" (without quotes) and hit Enter key to boot into rescue environment. On newer versions (RHEL7.x and above), need to select 'Troubleshooting' and then select 'Rescue Red Hat Enterprise Linux' option to go to Rescue environment (otherwise, press Esc key and type 'linux rescue nomount' at the boot prompt).

- First make the logical volumes available by running these commands:

# lvm pvscan
# lvm vgscan
# lvm lvscan
# lvm lvchange -ay

- Next, run the file system check on the respective LV as shown below:

# e2fsck -fy </dev/vgname/lvname>

(6) How to reduce/extend a root file system mounted on a logical volume?

ANS : To Reduce :

→ Boot into rescue mode as explained before.
→ Activate the logical volumes (lvm).
→ Run file system check on respective lvm.
→ Reduce the file system first by using the “resize2fs” command as shown (suppose we reduce the root file system to a total size of 5GB; here the root volume group name is ‘vg1’ and the logical volume name is ‘rootlv’):

# resize2fs /dev/vg1/rootlv 5G

→ Next, reduce the corresponding logical volume using either “lvreduce” or “lvresize” commands:

# lvreduce -L 5G /dev/vg1/rootlv

→ Run fsck again.
→ Verify if the lvm is showing the reduced size.

To extend :

→ No need to boot into rescue, this could be done online.
→ Un-mount the respective file system first (this is not absolutely necessary since the size can be extended online, but un-mounting the file system first is always recommended)
→ Extend the lvm to the required new size or by specifying how much size to be increased as shown by using the “lvextend” or "lvresize" command:

# lvextend -L +1G /dev/vg1/rootlv (extends the logical volume by an additional 1GB)

→ Extend the file system :

# resize2fs /dev/vg1/rootlv

→ Run fsck if necessary.

NOTE: In the case of RHEL7 and above, where XFS is the default file system, shrinking is not supported; hence reducing an XFS file system is not possible as of now.

(7) How to verify if a filesystem state is marked as clean?

ANS : This can be done by using “dumpe2fs” or “tune2fs” commands as shown below:


[root@redhat Desktop]# dumpe2fs -h /dev/sda1 |grep -i state
dumpe2fs 1.41.12 (17-May-2010)
Filesystem state: clean


[root@server8 Desktop]# tune2fs -l /dev/sda1|grep -i state
Filesystem state:      clean

(8) How to find out Backup Superblocks for a logical volume?

ANS : One could use either “dumpe2fs” or “mke2fs” commands as shown in the below example :

[root@redhat]# dumpe2fs /dev/vg1/rootlv | grep -i "backup superblock"
dumpe2fs 1.41.12 (17-May-2010)
Backup superblock at 32768, Group descriptors at 32769-32769
Backup superblock at 98304, Group descriptors at 98305-98305
Backup superblock at 163840, Group descriptors at 163841-163841
Backup superblock at 229376, Group descriptors at 229377-229377
Backup superblock at 294912, Group descriptors at 294913-294913
Backup superblock at 819200, Group descriptors at 819201-819201
Backup superblock at 884736, Group descriptors at 884737-884737


If the file system is un-mounted, the “mke2fs -n” command could be used, as shown here executed on the logical volume ‘rootlv’ from the volume group ‘vg1’:

# mke2fs -n /dev/vg1/rootlv | grep -i -A1 "superblock backup"

(9) How to find out list of actual devices associated with a logical volume using lvs command?

ANS : [root@redhat]# lvs -o +segtype,devices

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Type Devices
homelv vg1 -wi-ao---- 1000.00m linear /dev/sda2(0)
rootlv vg1 -wi-ao---- 5.49g linear /dev/sda2(1000)
swaplv vg1 -wi-ao---- 1000.00m linear /dev/sda2(2500)
tmplv vg1 -wi-ao---- 1000.00m linear /dev/sda2(4000)
usrlv vg1 -wi-ao---- 6.37g linear /dev/sda2(2750)
usrlv vg1 -wi-ao---- 6.37g linear /dev/sda2(4250)
varlv vg1 -wi-ao---- 2.93g linear /dev/sda2(250)
testlv vg2 -wi-ao---- 52.00m linear /dev/sdb(0)

Using vgdisplay command as shown below :

# vgdisplay -v vg1   {this would list out all the details of the volume group ‘vg1’ including corresponding lvs and pvs }

--- Physical volumes ---
PV Name /dev/sdb
PV UUID nFwUZd-F4Cm-tbeq-y75d-OUzX-S2ze-rEeR8x
PV Status allocatable
Total PE / Free PE 25 / 0

PV Name /dev/sdc
PV UUID KWfeHu-cP6u-Qpdz-XzDD-iomY-uSKd-2d6V6G
PV Status allocatable
Total PE / Free PE 25 / 24

- Check the latest lvm archive within ‘/etc/lvm/archive/’ directory as demonstrated below:

[root@redhat ]# grep device /etc/lvm/archive/
device = "/dev/sdb" # Hint only
device = "/dev/sdc" # Hint only

(10) How to set "rw" permissions on a file for a specific user while denying access to all other users except root
(exclusive permissions)?

ANS : Use the “setfacl” command to get this done as shown below:

# setfacl -m u:<UserName>:<PermissionBits> <File/FolderPath>

Example: To grant ‘rw’ permissions on a file ‘/testfile’ for a user “redhat”:

# setfacl -m u:redhat:rw /testfile

To view which ACL permission entries are set, we could use the "getfacl" command:

# getfacl /testfile

(11) What are the different fields in the file ‘/etc/fstab’?

ANS : DeviceName | MountPoint | FilesystemType | MountOptions | DumpFrequency | FsckCheckOrder

DeviceName : This field denotes actual block device being used or a label or an UUID.

MountPoint : The reference mount point being used to access a block device.

FilesystemType : This specifies file system being used on a block device.

MountOptions : These are the options used when mounting the specific file system; commonly used options are ‘defaults, ro, user’, etc.

DumpFrequency : If the dump backup utility is used, the number in this field controls dump's handling of the specified file system.

FsckCheckOrder : Controls order in which file system check has to be performed, usually root file system would hold a value of ‘1’, other file systems would hold ‘2’ and ‘0’ would be used if file system check has to be disabled on boot up. For XFS file systems this is always set to '0'.
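The six fields can be illustrated on a sample line (the device and mount point below are hypothetical):

```shell
# Split a sample fstab entry into its six fields
line='/dev/vg1/datalv /data ext4 defaults 1 2'
echo "$line" | awk '{ printf "device=%s mountpoint=%s type=%s options=%s dump=%s fsck=%s\n", $1, $2, $3, $4, $5, $6 }'
```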

(12) How do you skip the initial fsck (file system check) on a file system while booting up?

ANS : Edit the file ‘/etc/fstab’ and set the last column of the respective file system entry to 0 (zero). This skips the file system check at boot.

The initial file system check is not performed in case of XFS based file systems.

(13) How to list all the files with SUID (Set User ID) bit set under the top level root directory and ignore any errors/warnings in the process, and list the output in long list format?

ANS : This can be done by using the "find" command :
# find / -type f -perm -4000 2> /dev/null | xargs ls -l

(14) How to list all the files/folders with SUID/SGID/Sticky Bit (Set Group ID) bit set under the top level root directory and ignore any errors/warnings in the process, and list the output in long list format?

ANS : Use the “find” command to get this done as shown here :

# find / -type f -perm /7000 2> /dev/null | xargs ls -l

(15) How to find out all orphaned files and directories in the system (which are not owned by anyone else)?

ANS : This can be done by using the "find" command as shown below:

# find / -nouser -o -nogroup 2> /dev/null

(16) How to force file system check to run after random/maximum mount counts in EXT based systems?

ANS : One option is to use the "tune2fs" command (on Ext file systems) to set this. The two values "Maximum mount count:" & "Mount count:" reported by ‘tune2fs -l’ control after how many mounts the file system check should run.

By default the "Maximum mount count:" option is set to "-1" when the file system gets created, which disables this feature. So, to get a file system check (fsck) on system reboot/reset, this value has to be set, and the trigger then depends on "Mount count", which increments each time the file system is mounted (for example, at every system reset/restart).

Default option:

[root@server yum.repos.d]# tune2fs -l /dev/sda2 |grep -i "mount count"
Mount count: 9
Maximum mount count: -1

So, to force 'fsck' on count reach of 10, we can tune-up the file system as shown below:

[root@server yum.repos.d]# tune2fs -c 10 /dev/sda2
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to 10

[root@server yum.repos.d]# tune2fs -l /dev/sda2 |grep -i "mount count"
Mount count: 9
Maximum mount count: 10

Now, on the 10th mount of the file system a file system check is forced, after which the mount counter is reset.
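The same tuning can be tried risk-free on an ext4 image created inside a plain file, so no spare disk or root privileges are needed (the image path below is arbitrary):

```shell
# Build a small ext4 file system inside a 16 MB file
dd if=/dev/zero of=/tmp/ext4.img bs=1M count=16 status=none
mkfs.ext4 -q -F /tmp/ext4.img

# Force a check every 10 mounts, then read the value back
tune2fs -c 10 /tmp/ext4.img > /dev/null
tune2fs -l /tmp/ext4.img | grep -i "maximum mount count"
```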

Otherwise, there is another global option which could be used to force a file system check after 180 days or after a random number of mounts, by enabling "enable_periodic_fsck" in the ‘/etc/mke2fs.conf’ file as shown below:

[root@server yum.repos.d]# grep -i enable_periodic_fsck /etc/mke2fs.conf
enable_periodic_fsck = 1

NOTE : To get this, the ‘e2fsprogs’ package should be at least version "e2fsprogs-1.41.12-20.el6", so older packages need to be updated. This option has been added from RHEL 6.6 onwards.


(17) How to search for all files with extension "*.log" in the current working directory and find out total disk space consumed and skip such files under any sub-directories?

ANS : There are situations wherein an admin would be required to find out total disk space consumed by some files such as "*.log" or "*.dat" etc., so one could use this command:

[root@ data]# find . -maxdepth 1 -name '*.log'  | xargs ls -l | awk '{ TOTAL += $5} END { print TOTAL }'

[root@ data]# find . -maxdepth 1 -name '*.log' -type f -exec du -bc {} + | grep total | cut -f1

With a small number of files, running the 'find' command with 'du' works fine; however, with a very large number of files one may hit the error "Argument list too long", in which case "xargs" should be used to batch the arguments and avoid such errors. Ref :

$ find . -maxdepth 1 -name '*.dat' | xargs ls -l | awk '{ TOTAL += $5} END { print TOTAL }'
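Here is a self-contained run with sample files of known sizes, so the total can be checked by hand (the /tmp/logdemo directory is just a scratch location):

```shell
# Create two *.log files of known size, plus one in a sub-directory
# that -maxdepth 1 must skip
mkdir -p /tmp/logdemo/sub
head -c 100 /dev/zero > /tmp/logdemo/a.log
head -c 200 /dev/zero > /tmp/logdemo/b.log
head -c 999 /dev/zero > /tmp/logdemo/sub/c.log

# Total bytes of *.log directly under /tmp/logdemo: 100 + 200 = 300
cd /tmp/logdemo
find . -maxdepth 1 -name '*.log' -type f -exec du -bc {} + | awk '/total$/ { print $1 }'
```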

(18) What are the major differences between RHEL6 vs RHEL7 vs RHEL8 vs RHEL9?

ANS : The default file system used from RHEL7 onwards is XFS (it was ext4 in RHEL6).

(19) What are the differences between hard & soft links in Linux file system?

ANS :

Hard Links:

- Created using the same I-node number as the original file, with a different name.
- Can only be created within the same file system.
- Remain intact even if the original file is removed.
- Can't be created for directories.

Soft Links:

- Created as an alias name referring to the original file name, but use a different I-node.
- Can be created across file systems.
- Become dead (dangling) links once the original file is removed.
- Can be used to create links to directories.
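The differences are easy to verify in a scratch directory (the paths below are throwaway examples):

```shell
# Set up a file with one hard link and one symlink to it
mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
rm -f orig.txt hard.txt soft.txt
echo data > orig.txt
ln orig.txt hard.txt            # hard link: shares the same inode
ln -s orig.txt soft.txt         # symlink: its own inode, stores the path

stat -c %i orig.txt hard.txt    # prints the same inode number twice

rm orig.txt
cat hard.txt                    # data still reachable through the hard link
cat soft.txt 2> /dev/null || echo "dangling symlink"
```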

=======Boot/Kernel Related=======

(1) After installing the latest kernel package, the system is still booting from an older kernel. How to make the system boot from the newly installed kernel?

ANS : - Verify if the new kernel package was successfully installed.

- Verify that the kernel stanza is added in the ‘grub.conf’ or ‘grub.cfg’ (in case of RHEL7 and above) file.

- Make the new kernel the default kernel. In case of RHEL7 and above, run the command ‘cat /boot/grub2/grubenv’ and check whether “saved_entry” is set either to 0 (numerical zero) or to the new kernel version; otherwise, run the command “grub2-set-default 0” to make the newly installed kernel the default kernel and then reboot. In earlier versions, edit the ‘/boot/grub/grub.conf’ file and make sure the “default” entry is set to 0; if not, change this line and reboot.

(2) There is an error message "cannot resolve label for a file system" on boot and system drops into single user mode with 'Ctrl+D error'. How could this be fixed?

ANS : Here are the steps to resolve it:

-- Make a note of the file system for which the label failed to resolve.

-- At the “Ctrl+D” error screen user needs to enter root password to get into single user mode.

-- Then remount root file system in 'rw' (read/write) mode:

# mount -o remount,rw /

-- Use "blkid" or "e2label" or "findfs" or "lsblk -f", or check in '/dev/disk/by-uuid/' or '/dev/disk/by-label/', to find out the labels/UUIDs assigned to each device. If the label in the ‘/etc/fstab’ file is not correct, then change it as required.

-- Exit and reboot. This should fix the label error and the system should boot normally.

(3) How to avoid user accessing grub menu and passing on parameter while booting?

ANS : To avoid such things, one has to set a ‘grub’ password so that an unauthorized person can't access the grub menu and pass parameters to the kernel while booting.

Use the command “grub2-set-password” (only the root user is allowed to run it), which generates the file ‘/boot/grub2/users.cfg’ storing the encrypted password. This password gets verified when someone interrupts the boot loader while booting, and the user is allowed to proceed only after successful password verification.

For RHEL6.x :- A grub password can be set by using the command "grub-crypt". This generates an encrypted password using the "SHA512" algorithm. The "grub-md5-crypt" command could also be used to generate an "MD5" encrypted password.

Later, paste the generated encrypted password into ‘/boot/grub/grub.conf’ file as shown below (only a part of grub.conf file is pasted below):

password --encrypted $1$EgrNz1$MbdclVToRCCsOF7OuBEgb/
title Red Hat Enterprise Linux (2.6.32-431.el6.x86_64)

In RHEL 5.x :- Use "grub-md5-crypt" command to generate password and could use "password --md5 <password-hash>" format in grub.conf file.

(4) How to disable the “NetworkManager” service/daemon from getting loaded at boot in RHEL7 and above?

ANS : Disable the service as shown below:

# systemctl disable NetworkManager.service

To disable and stop the service: # systemctl disable --now NetworkManager.service

For RHEL6 & lower releases:- Use the command “chkconfig” to disable or stop any services from getting loaded in a particular runlevel :

[root@redhat]# chkconfig --level 5 NetworkManager off

(5) Which is the parameter that is required to be added to ‘grub.conf’ or ‘grub.cfg’ while configuring kdump?

ANS : It is required to add the keyword “crashkernel=XXX” to the grub configuration file. In most cases, adding the parameter as “crashkernel=auto” would work; however, it may not work well in RHEL6 and older versions, in which case the value has to be set manually according to the physical memory (RAM) available on the system.

For more details on configuring kdump and its configurations, please check this link:

(6) How can I reboot quickly into another kernel (if available), bypassing the BIOS process?

ANS : This is possible provided the ‘kexec-tools’ package is installed on the system.

Let’s say that there are two kernel images installed on my system as shown below:

- kernel-2.6.32-642.4.2.el6.x86_64 &
- kernel-2.6.32-642.el6.x86_64

The running kernel is "2.6.32-642.4.2.el6.x86_64". So, to boot quickly into the older kernel, bypassing the BIOS process, run the below commands:

# kexec -l /boot/vmlinuz-2.6.32-642.el6.x86_64 --initrd=/boot/initramfs-2.6.32-642.el6.x86_64.img --command-line="$(cat /proc/cmdline)"

So, what the above command does is:

“kexec -l /boot/vmlinuz-2.6.32-642.el6.x86_64” --- specifies which kernel image to load
“--initrd=/boot/initramfs-2.6.32-642.el6.x86_64.img” --- specifies which initrd image to load
“--command-line="$(cat /proc/cmdline)"” --- passes the current kernel command-line parameters on to the new kernel.

Once the above command is executed successfully, run the command "kexec -e" to reboot quickly into the older kernel, bypassing the BIOS process.

(7) What are the different stages involved in the boot process of a Linux system?

ANS : Please refer the below link to understand the boot process:

=======System/Hardware Related=======

(1) Some of the commonly used commands to gather information on system hardware and other details from a Linux system are listed below:

→ To check memory availability
# free -m
# cat /proc/meminfo
# top
# vmstat
# dmidecode --type memory (would show hardware specs of RAM)

→ To check CPU details ←
# cat /proc/cpuinfo
# lscpu
# dmidecode --type processor (would show hardware specs of CPU)

→ To check the loaded modules ←
# lsmod

→ To load a module ←
# modprobe <ModuleName>

→ To check all (active/inactive) network interfaces ←
# ifconfig -a
# ip a
# cat /proc/net/dev

→ To scan SCSI bus so that all newly added devices/luns would come up ←
# echo "- - -" > /sys/class/scsi_host/host<ID>/scan

{to get the command, need to install 'sg3_utils' package}

→ To check most recent system reboot ←
# last reboot | head -1
{last command reads from ‘/var/log/wtmp’ file and ‘lastb’ reads from ‘/var/log/btmp’ files respectively}

[root@redhat Desktop]# last reboot | head -1
reboot system boot 2.6.32-431.el6.x Thu Jul 17 15:43 - 17:09 (01:25)

→ To check the most recent system shutdown time ←
# last -x|grep shutdown | head -1

[root@redhat Desktop]# last -x | grep shutdown|head -1
shutdown system down 2.6.32-431.el6.x Thu Jul 17 15:43 - 15:43 (00:00)

→ To check processor statistics ←
# mpstat
# iostat {these commands belong to the sysstat package}
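For scripting, values can also be pulled straight from '/proc/meminfo'; a trivial sketch that reports total RAM in MB:

```shell
# /proc/meminfo reports kB; convert the MemTotal value to MB with awk
awk '/^MemTotal:/ { printf "MemTotal: %d MB\n", $2 / 1024 }' /proc/meminfo
```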

(2) Common Standard Ports Being Used :


- 21/20 FTP
- 22 SSH
- 23 telnet
- 25 SMTP
- 53 DNS (TCP/UDP)
- 68 DHCP
- 69 TFTP
- 80/443 http/https (TCP)
- 88/464 Kerberos (TCP/UDP)
- 110 POP3
- 123 NTP(UDP)
- 137 nmbd
- 138,139,445 smbd
- 143 IMAP
- 161 SNMP
- 389/636 LDAP/LDAPS (TCP)
- 514 syslogd (UDP)
- 2049 NFS
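These well-known assignments are also recorded in '/etc/services', which can be queried instead of memorising them; for example, mapping port 22/tcp back to its service name (prints 'ssh' on a standard /etc/services):

```shell
# Look up which service name is registered for 22/tcp
awk '$2 == "22/tcp" { print $1; exit }' /etc/services
```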

(3) How to find out the system hardware details such as manufacturer, product name, BIOS version, system serial number etc.?

ANS : This can be found using various methods, however, most commonly used command is "dmidecode".

# dmidecode --type system |egrep -i "Manufacturer|Product Name|Serial Number|Family"

# dmidecode --type system |grep "System Information" -A 8

- To find out BIOS details :
# dmidecode --type bios |grep "BIOS Information" -A 6

Please visit my blog page for more details and easier way to get the hardware information:

(4) The option "Open in Terminal" is missing when a user right clicks on terminal in GUI. How to fix this?

ANS : This is basically because of missing package "nautilus-open-terminal". Once this is installed, the right click option would show up.

(5) How to change the default display manager or desktop from Gnome Display Manager (GDM) to KDE?

ANS : You would need to install the KDE related packages first. Run the command 'yum groupinstall "KDE Desktop"' to get all the KDE related packages installed (provided yum is configured). Once this is done, create the file "/etc/sysconfig/desktop" and add the below lines:

DESKTOP="KDE"
DISPLAYMANAGER="KDE"

After this change, restart the X window session.

In RHEL 5.x, you could use "switchdesk" command to switch to different display managers. So, to switch to KDE, you could use the command "switchdesk kde". This may prompt to install "KDE Software Development" group if not installed.

Implementation and use of KDE has been deprecated from RHEL 8 onwards.

(6) How to run 'free' command to print output of 2 instances with 2 seconds interval and store that output in a file (skipping any errors/warnings), and run this in background?

ANS : This can be done using the command “free -s 2 -c 2 1> /tmp/free.out 2> /dev/null &” as shown in the below snap:
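A quick variant with a 1-second interval shows the mechanics (the output path is arbitrary; 'wait' blocks until the background job finishes):

```shell
# Two snapshots, 1 second apart, collected in the background
free -s 1 -c 2 1> /tmp/free.out 2> /dev/null &
wait

# Each snapshot contributes one "Mem:" line
grep -c '^Mem:' /tmp/free.out
```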

(7) How to find out when was the last time a service got restarted?

ANS : One way is to check in respective configured log file, otherwise, we could find the process start time using "ps" command.

For example, to find out the restart occurrences of "SSHD" then check in the file ‘/var/log/secure’, one could see something similar to below lines :

Jun 18 05:29:18 nagios sshd[5413]: Received signal 15; terminating.
Jun 18 05:29:18 nagios sshd[5612]: Server listening on port 22.
Jun 18 05:29:18 nagios sshd[5612]: Server listening on :: port 22.

Using ps command (this would show the time when service got started) :

[root@nagios Desktop]# ps -p $(pgrep sshd|head -1) -o lstart
Thu Jun 18 05:29:18 2015

Replace the service name as required for whichever service needs to be checked. To find out the most recent 'httpd' service restart time :

[root@nagios Desktop]# ps -p  $(ps -C httpd -o pid=|head -1) -o lstart
Thu Jun 18 05:34:35 2015

Otherwise, check in the httpd log files, where you would see lines similar to the ones below:

[Thu Jun 18 05:34:35 2015] [notice] caught SIGTERM, shutting down
[Thu Jun 18 05:34:36 2015] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 configured -- resuming normal operations
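The same 'ps -o lstart' trick works for any PID; for instance, the start time of the current shell process:

```shell
# lstart prints the full start timestamp of the given PID;
# the trailing '=' suppresses the column header
ps -p $$ -o lstart=
```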

(8) What is an elevator (disk elevator) or IO scheduler?

ANS:- It is an algorithm used by the storage sub-system that determines how pending read/write requests get reordered and merged into queues so they can be served in a way that is efficient for the system.

To find out the elevator being used on a disk :
# cat /sys/block/<DEVICE>/queue/scheduler

[root@ansible-host ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]

To change an elevator on the fly:

[root@ansible-host ~]# echo deadline > /sys/block/sda/queue/scheduler
[root@ansible-host ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq

To make changes permanent, add the elevator parameter (elevator=deadline) to the default boot kernel in respective grub configuration file.
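A small loop over sysfs shows the active scheduler (the bracketed entry) for every disk present; on systems without any entries under /sys/block the loop simply prints nothing:

```shell
# Each scheduler file lists all available elevators; the active one
# is shown in square brackets
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue          # skip the unexpanded glob if no disks
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```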

Different elevator methods being used:

--"-- Noop --"---

As the name implies, this does no data re-ordering or queuing beyond managing requests on a First-In-First-Out basis. This is ideal when there is a separate storage controller which does better data reordering and has a better understanding of the disk layout, so that the kernel workload gets reduced. It is also an ideal option when SSD drives are used.

--"-- Anticipatory --"--

In this disk elevator algorithm, after each read/write request the scheduler waits a short time before dispatching, to see if another read/write request for a nearby sector comes up. This wait is controlled by the "antic_expire" parameter, measured in milliseconds; if a nearby request does arrive, the scheduler waits for another antic_expire period, and this continues. However, each read/write request also has a "read/write expire" timeout set, and the request is served once that countdown reaches 0 (zero). This is ideal for contiguous reads, such as on FTP servers, but not good for database servers.

--"-- Deadline --"--

Just like the anticipatory elevator, the deadline elevator also maintains read/write expiry timeouts; however, it doesn't wait for a nearby request before moving requests into the queue. Requests in the queue get served in batches based on FIFO. This is best suited for servers doing heavy read/write operations, such as database servers.

--"-- CFQ (Completely Fair Queuing) --"--

This is designed for systems which do a lot of small disk reads/writes where multiple processes generate disk IO. This is the default elevator used in RHEL 6. It would normally be used on desktop systems or Usenet servers.

The "mq-deadline" scheduler is the default in RHEL 8 and above; in RHEL7 the default is "deadline", with "cfq" used by default for SATA drives.

=======Networking Related=======

(1) How to change the speed of a network interface to 100Mbps with auto-negotiation off and duplex in full mode (example for interface eth0)?

ANS : This could be done using the command “ethtool” as shown:

# ethtool -s eth0 speed 100 autoneg off duplex full
{changing the speed on the fly}

To make these changes persistent, add the below line to the respective network configuration file (if the network interface is eth0, edit the file /etc/sysconfig/network-scripts/ifcfg-eth0):

ETHTOOL_OPTS="speed 100 autoneg off duplex full"

(2) How to find out the network routing table by using commands?

# route -n
# netstat -nr
# ip route

NOTE: In case of RHEL7 and above, the “route” and “netstat” commands are not available by default. To get these commands, need to install “net-tools” package.

(3) After changing the network card on a system, the network interface name got changed from ‘eth0’ to ‘eth1’. This shows up when running the command "ifconfig -a". Running "service network restart" throws the error "Device eth0 doesn't seem to be present", while "ifconfig -a" lists ‘eth1’ instead of ‘eth0’, and running "ifup eth1" fails with "configuration for eth1 not found". How could this issue be rectified? How to get the network interface name changed from ‘eth1’ back to ‘eth0’ as it was before? [applicable to older releases such as RHEL6 and below]

ANS : Follow these steps to fix this issue:

→ Make a note of the hardware or MAC address of the second network interface ( in this case this is eth1 ). Run the command “ifconfig eth1” to get the details.

→ Change the working directory to ‘/etc/sysconfig/network-scripts’.

→ Edit the respective network interface configuration file (eth0 in this case) and change the "HWADDR" to match the new address. Save and Exit.

→ Now, edit "/etc/udev/rules.d/70-persistent-net.rules” file.

→ Next comment out the line of ‘eth0’ here and change the name of ‘eth1’ to ‘eth0’ and then save and exit.

→ Reboot the system. {this should fix the issue, tested on RHEL6.5 system}.

NOTE: This is not the case in RHEL7 and above. There is a change in how network interface names are detected and labelled. Instead of the traditional 'ethX' naming convention, naming depends on a few parameters: 'udev' supports different naming schemes based on firmware, topology, and location of the network interface.

Assigning fixed names based on firmware/topology/location information has the big advantage that the names are fully automatic and fully predictable, that they stay fixed even if hardware is added or removed (i.e. no re-enumeration takes place), and that broken hardware can be replaced seamlessly. That said, they are admittedly sometimes harder to read than the 'eth0', 'wlan0', etc. that everybody is used to. One example of an interface name used in RHEL7 is "enp5s0". By default, 'systemd' creates network interface names depending on the naming scheme set.


If the environment doesn't permit restart of the system, then one could use "udevadm trigger" command to get the changes done without restart as explained below:

→ First make sure that the respective interface config file is present, if not then need to create one manually. The interface name was changed from ‘eth0’ to ‘eth1’ here, hence need to look out for ‘ifcfg-eth0’ interface file under ‘/etc/sysconfig/network-scripts’ directory and change the "HWADDR" and other fields as necessary (these options such as "Hwaddr", "UUID" are not mandatory, these parameters could be dropped if not required).

→ Stop the network service.

# service network stop  (assuming NetworkManager is not being used )

→ Now, navigate to ‘/etc/udev/rules.d/’ directory and edit "70-persistent-net.rules" file. Make sure the hardware address recorded here is correct and change the "NAME" attribute value to match your needs, save it and exit.

→ Trigger the udevadm command to re-read network device rules and reload it:

# udevadm control --reload-rules
# udevadm trigger --verbose --subsystem-match=net

→ Now, check out the currently available network interfaces. This time it should show up the primary network interface name i.e. eth0.

# ip addr show

→ Start the network service.
# service network start

→ If for some reason the interface name doesn't change, then unload the network module and reload it (this would drop the current connection, please be aware). If there is no direct access to the system, schedule a simple "at" job with the required time stamp to reload the network module, after which the connection would come back up.

(4) How to fix/troubleshoot "no network" or "network down" or "unable to ping remote host" or "localhost doesn't ping" problems?

ANS : When network is down or unable to ping remote (another) system, it would be necessary to start checking the following things sequentially:

Step 1:  Check if local network interface is working

- Ping localhost to confirm that local network interface is up and required network modules are loaded. If unable to ping localhost check for following things:

- Check if "lo" (loop-back) interface is up ( # ip a s lo )

- You should see "LOOPBACK, UP, LOWER_UP" line in the output. If not then bring up the loopback interface : # ip link set up lo

- Check if address is mapped with localhost in ‘/etc/hosts’ file.

Step 2: Check if at least one network interface is up on system

# ip addr show

- This should show valid IP addresses and also indicate whether each interface is active (UP). If an interface is down, bring it up by running the command as shown below (considering the network interface is enp0s3):

# ip link set up enp0s3

- If unable to get a valid IP Address, then check how the IP Address is being provisioned; it could be static or dynamic. If static, then check the ‘/etc/sysconfig/network-scripts/ifcfg-enp0s3’ file (taking enp0s3 as the example here) and look for valid entries.

- If still unable to bring up the network interface, restart the network service.
# systemctl restart NetworkManager.service

- If the IP Address is assigned dynamically, then check whether there is any problem on the DHCP server side. Try setting a test (static) IP Address and check if that works.

Step 3: Next step is to ping the hostname to confirm that name resolution works good
# ping $(hostname -s)

- If the above ping fails, then ping the IP Address directly and test whether that works. If it does, the problem is with name resolution (DNS), so it should either be addressed at the DNS level or by fixing the ‘/etc/hosts’ file. If there is no DNS server involved and an entry is found in /etc/hosts but ping still fails, make sure the 'hosts' line in '/etc/nsswitch.conf' reads 'files' as the first entry, as shown below:

[root@node1 ~]# grep ^hosts /etc/nsswitch.conf
hosts: files dns myhostname
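'getent hosts' resolves a name through the same NSS stack, so it is a quick way to confirm the configured lookup order actually works:

```shell
# Resolves via the 'hosts' line of /etc/nsswitch.conf (files, then DNS)
getent hosts localhost
```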

Step 4: Ping another host/node OR Gateway address and check if that works

- Now, ping another host/node on the same network. If the ping succeeds, then we could say routing is working; otherwise, check whether the gateway is configured properly. Ping the gateway address and check.

- If unable to ping another remote host, then the problem could be at the router end or a routing problem. So, run the ‘tracepath’ command at this stage (earlier it was traceroute), which would help identify the cause; it doesn't need root privileges.

Step 5: Unable to ping anything either localhost or remote host or loopback address

- This could be a problem either with the native firewall (firewalld/iptables) or with sysctl settings configured on the system.

- Check whether ‘net.ipv4.icmp_echo_ignore_all’ has been set; if so, unset it:
- Verify : # sysctl -n net.ipv4.icmp_echo_ignore_all
- Unset : # sysctl -w net.ipv4.icmp_echo_ignore_all=0
- Permanent change : check '/etc/sysctl.conf' or the drop-in directories and make changes there if set.

- If it is a firewall problem, then make sure the "icmp" protocol and the "lo" interface are allowed for communication. Also verify that no ‘icmp-blocks’ are set on the default zone by running the command ‘firewall-cmd --list-all’.

(6) What are the alternative steps that could be used to test a remote server alive status when ping check fails (if blocked by firewalld or iptables)?

ANS  :

→ Ping failure doesn't necessarily mean that the remote host is down. It is possible that ICMP replies have been disabled (via sysctl) or blocked by firewall rules. If blocked by a firewall REJECT rule, a 'destination port unreachable' message is shown when someone tries to ping the remote host; however, no message is shown at all if the firewall is configured to DROP ICMP requests.

→ In this case, one could try to check if connectivity works via SSH. Try running SSH in verbose mode (ssh -v username@RemoteHostAddress). If this fails, then this would indicate either SSH service is down or user-restricted or blocked by firewall on remote host. Again, if firewall is configured to DROP requests on port 22 (default SSH port) then user would not get any notifications, but if configured with REJECT rules then user would get message as 'connection refused'.

→ At this stage we are still unsure whether the remote host is up when both ping & SSH fail to get an acknowledgement. At this juncture, one may use the common 'curl' command to test whether the remote host is listening on a port. This too gets no response if the port is DROPped by a firewall; otherwise, it shows messages such as 'Failed to connect' and 'No route to host'.

A successful check:

[root@node1 ~]# curl telnet:// -v
* Rebuilt URL to: telnet://
* Trying
* TCP_NODELAY set
* Connected to ( port 22 (#0)
SSH-2.0-OpenSSH_7.4

An unsuccessful check:

[root@node1 ~]# curl telnet:// -v
* Rebuilt URL to: telnet://
* Trying
* TCP_NODELAY set
* connect to port 22 failed: No route to host
* Failed to connect to port 22: No route to host
* Closing connection 0
curl: (7) Failed to connect to port 22: No route to host

→ The next step, when the telnet-style check also doesn't show a successful connection, is to use the ‘nmap’ command (belongs to the nmap package), which scans the remote host and provides details about its live status along with which ports are filtered/unfiltered.

# nmap -sn <RemoteHostIP>

→ This command performs only host discovery, without port scanning. This gives a quick view of the host's alive status, and one could see the message 'Host is up' in the output.

Use "nmap -sn <RemoteHostIP> -n" to skip name resolution. If the nmap command returns "0 hosts up" or "Host seems to be down", then it would indicate a problem with the remote system or the network.

→ Also there is the "netcat" utility (needs to be installed separately), another handy network tool which performs many functions including port scanning, so it could be used to test whether a port on a remote host is open or blocked by a firewall.

# nc -z <RemoteHostAddress> <PortNumber>

[root@server1 Desktop]# nc -z 22

Connection to 22 port [tcp/ssh] succeeded!
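When 'nc' isn't installed, bash's built-in '/dev/tcp' pseudo-device can serve as a minimal port probe. This is a sketch; 'check_port' is a made-up helper name, and /dev/tcp is a bash feature rather than a real device node:

```shell
# Prints "open" if a TCP connection to host:port succeeds, else "closed"
check_port() {
    (exec 3<> "/dev/tcp/$1/$2") 2> /dev/null && echo open || echo closed
}

check_port 127.0.0.1 1    # port 1 is almost never listening
```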

=======Security Related=======

(1) How do you backup and restore iptables (configurations)?

ANS : Using the command ‘iptables’ as shown below:

# iptables-save > /tmp/iptables.out
# iptables-restore < /tmp/iptables.out

To know more about 'firewalld' which has replaced 'iptables' in RHEL7 and above, visit the below link:

(2) What do the characters "S" or "s" in the execute bit location of the user permissions indicate?
ANS : A capital letter "S" indicates that SUID has been set but the execute permission is not set. A small letter "s" indicates SUID has been set together with the execute permission.
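This is easy to reproduce on a scratch file (the /tmp path is a throwaway example):

```shell
# SUID without the user execute bit shows a capital "S"
touch /tmp/suidperm
chmod 4644 /tmp/suidperm
ls -l /tmp/suidperm | cut -c1-10     # -rwSr--r--

# Adding the user execute bit turns it into a lowercase "s"
chmod 4744 /tmp/suidperm
ls -l /tmp/suidperm | cut -c1-10     # -rwsr--r--
```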

(3) How do you provide a user an exclusive permission to 'shutdown' or 'reboot' system?

ANS : Check the below blog post which provides details on how to get this done:

(4) How to disable password-less login for root user in single user mode?

ANS : Change the line that reads "SINGLE=/sbin/sushell" to "SINGLE=/sbin/sulogin" in "/etc/sysconfig/init" file.

This makes the system prompt the (root) user for a valid password to authenticate. However, if "init=/bin/sh" is passed as a grub parameter, then the system would still boot without prompting for a password.

This is not required in RHEL7 and above, as by default single user mode login is password protected and user would need to enter root password to login.

(5) How to temporarily disable all user login except root user (either via SSH or terminal or in GUI)?

ANS : This could be achieved by creating ‘/etc/nologin’ file (as root user). If this file exists, then any user who tries to log-in would get rejected and only root user would be allowed (the root user may not be allowed to login via SSH if "PermitRootLogin" is set to "no" in /etc/ssh/sshd_config).

(6) How to disable "Restart" & "Shutdown" buttons on the GUI Login screen?

ANS : In RHEL 6.x, you would need to use ‘gconftool-2’ or ‘gconf-editor’ commands for this purpose.

Using gconftool-2 command:

To disable "Restart" & "Shut Down" buttons on the GUI login screen, run the below command as root user:

# gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults --type bool --set /apps/gdm/simple-greeter/disable_restart_buttons true

These are defined as schemas in the file "/etc/gconf/schemas/gdm-simple-greeter.schemas".

Same way, if you wish to disable user list, run the below command as root user:

# gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults --type bool --set /apps/gdm/simple-greeter/disable_user_list true

Using gconf-editor command:

Run "gconf-editor" command, expand "apps" and then "gdm", click on "simple-greeter" and then check "disable_restart_buttons", then right click on "disable_restart_buttons" and choose "Set as Default".


The simplest way to achieve this in RHEL5.x is to run the command "gdmsetup" in GUI as root user, un-check "Show Actions Menu" under "Menu Bar" in "local" tab. This would remove "Restart" & "Shut Down" buttons on the GUI log-in screen.

In RHEL 7 and above:

Edit (as root user) the "/etc/dconf/db/gdm.d/01-custom-gdm-settings" file and add the following lines:

[org/gnome/login-screen]
disable-restart-buttons=true
Then update "dconf" database using the command "dconf update", finally restart gdm service "systemctl restart gdm"

(7) What does the umask value of 0022 indicate for the root user?

ANS : Before understanding this, one must understand the numerical & symbolic values being used to represent permission bits in Unix environment. It is as shown below:

r  - “read" permission - numerical equivalent value "4"

w - "write" permission - numerical equivalent value "2"

x  - "execute" permission - numerical equivalent value "1"

s -  "special" permission bit - numerical equivalent "4" for SUID (SetUserID), "2" for SGID(SetGroupID) & "1" for Sticky-bit.
u - "user"  
g - "group"
o - "others"  

Combining these bits:

read & write - This is shown as “rw-” symbolically and numerically as “4+2” i.e. 6.

read & write & execute - This is shown as “rwx” symbolically and numerically as “4+2+1” i.e. 7.

read & execute - This is shown as "r-x" symbolically and numerically as "4+1" i.e. 5, commonly seen on directories and executable files.

The below picture would show permissions bits and how to read on a file/directory:

Set/Unset Permissions:

This is done using the “chmod” command. For example, to set only "read & write (rw)" permission for the ‘owner’ (u), and no permissions for the ‘group’ (g) and ‘others’ (o), this could be done as below:
Numerical way (octal representation):
# chmod 600 <filename>


Symbolic way:

# chmod u=rw,go= <filename>

Now, let's check what the umask value 0022 indicates:

0 - The leftmost digit masks the special permission bits (SUID/SGID/sticky). Since it is zero, nothing is masked.

0 - The second digit masks the user (owner) level permissions. Since it is zero, the owner keeps all default permission bits.

2 - The third digit masks the group level permissions. For a file, the 'write' bit (numerical value 2) gets masked, so group users get only 'read' permission. Likewise, for directories, 'read & execute' are allowed and the 'write' bit is masked.

2 - The last digit masks the permissions for others (o), in the same way as for group. So, for any file others get 'read' only, and for directories 'read & execute'; the 'write' bit is masked in both cases.

Thus, when the root user creates a file or directory, this umask is applied to derive the effective permissions.

For a file this is (666-022=644), i.e. rw-,r--,r-- (read & write, read, read) for user, group and others (UGO) respectively; for a directory it is (777-022=755), i.e. rwx,r-x,r-x for UGO. Strictly speaking, the umask bits are cleared bitwise (mode & ~umask) rather than subtracted, but for common masks like 022 the simple arithmetic gives the same result. Similarly, the default umask value for ordinary (non-root) users is 0002.
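The effect of the 0022 umask can be observed directly (a sketch; it runs in a subshell so the caller's umask is left untouched):

```shell
(
  umask 0022
  d=$(mktemp -d)
  touch "$d/file"             # new files start from 666, masked to 644
  mkdir "$d/dir"              # new directories start from 777, masked to 755
  stat -c '%a' "$d/file"      # prints: 644
  stat -c '%a' "$d/dir"       # prints: 755
  rm -rf "$d"
)
```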
 ..................... to be continued :)

