The EFLAGS Register

This article covers the Intel IA-32 EFLAGS register and some interesting things I found while studying these flags. We will use the GNU debugger (gdb) to examine the status of the EFLAGS register.

First, the theory about the EFLAGS register:

The EFLAGS register in an IA-32 processor stores status flags that reflect the result of the last arithmetic or logical instruction executed.

Not all instructions affect the EFLAGS register (mov, bswap and xchg leave it untouched), but arithmetic instructions such as inc (increment), add (addition), mul and div do update it.

Before we go further into EFLAGS, there are a few points to remember.

  • We cannot examine the whole EFLAGS register the way we examine a general-purpose register.
  • There is no instruction that modifies the whole register directly.
  • There are some instructions that can be used to modify certain bits of the register, but they are beyond the scope of this article.

We will be looking at some of the flags of the register using simple examples:

  1. Carry Flag
  • Keeps the status of the final carry-out while computing the result of the last arithmetic instruction.
  • While adding two numbers, the carry flag holds the carry-out of the most significant bit.
  • Example: adding 253 & 4. For this example we will use the "al" register, which is the lower 8 bits of the EAX register.
  • General Purpose Registers

    I chose this example specifically to show the Carry flag. Since our numbers fit in 8 bits, we will use the lower 8 bits of the eax register, which is al, and add 4 to 253. Below is the sample code.

Adding 2 numbers

Assembly Language Program in AT&T style

We assemble the above code using the GNU assembler and linker.

add1.s


We will use the GNU debugger (gdb) to view the contents of the registers.

Gnu Debugger


We will set a breakpoint at line 4 and run the program. Type "n" to execute line 4.

Set break point and run the program


Type "info registers" at the gdb prompt to view the current values in the registers.

info registers


As we can see from the above figure, gdb shows register al as -3 instead of 253. The register still holds the bit pattern 0xfd; gdb simply prints it as a signed 8-bit value, and in the signed range of -128 to 127 the pattern 0xfd means -3.
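gdb's signed display of al can be reproduced in a few lines. This is a small Python sketch of my own (not gdb output) showing why the byte 0xfd prints as -3:

```python
# Interpret a raw byte two ways: unsigned (0..255) and
# signed two's complement (-128..127), as gdb does for "al".
def as_signed8(value):
    value &= 0xFF                      # keep only the low 8 bits
    return value - 256 if value >= 128 else value

print(as_signed8(0xFD))  # 253 unsigned -> -3 signed
print(as_signed8(0x7F))  # 127 stays 127
```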

Type "n" or "next" to execute line 5 of the program, which adds 4 to register al.

CF is seen in gdb


When we add 4 to 0xfd (253), the full result is 0x101, which does not fit in 8 bits: register al keeps the low byte 0x01, and the carry out of the most significant bit sets the Carry Flag (CF). We can see from the above figure that eflags shows CF set, as expected.
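The carry-out the processor computes can be modelled outside the CPU. Below is a minimal Python sketch (the function name is mine) of an 8-bit add that returns the result byte plus CF, matching what gdb showed:

```python
def add8(a, b):
    """8-bit add: returns (result byte, carry flag)."""
    total = (a & 0xFF) + (b & 0xFF)
    return total & 0xFF, int(total > 0xFF)

result, cf = add8(253, 4)
print(hex(result), cf)   # 0x1 1 -> al = 0x01 and CF set
```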

To check only the eflags register, we can type "info reg eflags" at the gdb prompt.

2. Zero Flag

  • Zero flag is set to 1 if the result of the last flag-modifying instruction is 0

Examples:

adding negative & positive number


In the above code we load 0xfd, which is -3, into register al, and then we add +3 to it. So when the processor executes line 5, the result is 0, and the processor sets ZF in the eflags register. We can view this when we run the above program through gdb.
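The same 8-bit model extends naturally to the zero flag. An illustrative Python sketch (names are mine, not from the article's assembly code):

```python
def add8_flags(a, b):
    """8-bit add returning (result, CF, ZF), like the EFLAGS bits."""
    total = (a & 0xFF) + (b & 0xFF)
    result = total & 0xFF
    return result, int(total > 0xFF), int(result == 0)

print(add8_flags(0xFD, 3))   # 0xfd (-3) + 3 -> result 0, so ZF is set
```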

I will cover the rest of the eflags in the next article.

Using gdb layout when debugging Assembly Language Programs


In my quest to learn programming, I have started my initial steps with assembly language programming (ALP). I have been on this endeavour for quite some time.

This post is not about ALP itself, though, but about an important gdb option called layout, which helps newbies learning ALP a lot. Before I explain this option, consider the program below:


Figure-1

The above program calculates the sum of two values (17 and 15) using a sum function. The result is saved in the ebx register. Let us first assemble and link the program. We assemble the source with the GNU assembler, using the -gstabs option so that the assembly code can be debugged through gdb.
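The assembly listing itself appears only as an image (Figure-1), so as a stand-in, here is a hedged Python sketch of the stack discipline the program is described as using; the function and variable names are mine, not the original source:

```python
# Toy model of the described mysum flow: push two arguments onto a
# stack, call a sum function that pops them, and keep the result in
# a variable standing in for the ebx register.
stack = []

def sum_func():
    # The callee takes its two operands from the stack.
    b = stack.pop()
    a = stack.pop()
    return a + b

stack.append(17)    # like: pushl $17
stack.append(15)    # like: pushl $15
ebx = sum_func()    # like: call sum; result kept in ebx
print(ebx)          # prints 32
```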

From man as:
--gstabs: Generate stabs debugging information for each assembler line. This may help debugging assembler code, if the debugger can handle it.


Figure-2

From Figure-2 we can see that the program "mysum" runs successfully. Let us now run it through gdb. What we want to know is the following:

i) Values of registers while the code is being executed at each step
ii) Most importantly we want to see the code and registers at the same time.

To accomplish the above goals, gdb provides a text user interface (TUI), built on the curses library, to show the source file. This feature is not limited to the source file: it can also show the assembly code (asm) and the registers (regs). In our case we need to view the assembly code and the registers at the same time, so that we can watch the registers change as our asm code executes.


Figure-3

We first start by invoking the program mysum under gdb and setting a breakpoint at the _start function:


Figure-4

After setting the breakpoint, run the program by typing "run" (or just "r") at the gdb prompt; execution stops at the first breakpoint, which in our case is the _start function. At this point let's invoke the "asm" layout by typing "layout asm" at the gdb prompt.

The asm layout looks like this:


Figure-5

In Figure-5, we can see our asm code in more detail: the address where each instruction resides, and the instruction itself. Our program counter starts at 0x8048054, which is the start of our program. From here we will keep stepping through the code and watch the register values.

To load the register layout, type "layout regs" at the gdb prompt, and gdb will automatically split the TUI to show both the asm code and the registers, as shown below:


Figure-6

In Figure-6, the instruction to be executed is highlighted, and we can also see the values of the registers. We step through the code by typing the "step" (or in short "s") command at the gdb prompt, which executes the current instruction and highlights the one to be executed next:


Figure-7

In Figure-7, we can see that our program counter now points to the next instruction at address 0x8048056, which pushes 0xf (15) onto the stack. The register layout also shows EIP pointing to the code to be executed and the current value of the ESP register.

As we keep stepping through the code (use "s" at the gdb prompt), when it enters the sum function we can watch the base pointer register (EBP) being saved and then loaded with the value of the ESP register:


Figure-8

Our code has passed the values 17 and 15 on the stack, and in the sum function we copy these values into the general-purpose registers ecx and edx. Figure-9 shows that the ecx and edx registers have been loaded with 17 and 15, as written in the code.


Figure-9

Keep stepping through the code; once it reaches the end of the sum function, where we exit the function by popping the stack, we can see in the register layout the stack pointer being restored and control returning to _start:


Figure-10

Once we are back in the _start function, we see that the sum of 17 and 15 is stored in the ebx register; we then load 1 into the eax register and raise the interrupt to invoke the exit system call. The output of the program, i.e. the sum of 17 and 15, can be viewed by checking the exit status, which is the value in the ebx register.


Figure-11

I hope the above information is useful for newbies while debugging assembly language code.

Note: "layout regs" does not work with gdb version "gdb-7.2-51.el6.i686" on RHEL6; it crashes gdb. Fedora 15 and the latest rawhide have the fix, and later versions of gdb on RHEL6 may include it as well.

Authenticating using polkit to access libvirt in Fedora 18


Starting with Fedora 18 there have been some noticeable changes to polkit. PolicyKit grants unprivileged applications, or in this case users, access to certain privileged operations. I generally use systems with SELinux enabled and also confine my users. Since most of my job involves testing various applications, I keep creating a lot of VMs (RHEL5, RHEL6), and virt-manager is my preferred application for this.

Recently I was assigned a new Intel machine with hardware virtualization enabled and a 1TB hard disk, so I installed Fedora 18 on it to create VMs. My requirement is that I should be able to install VMs as a non-root user, and a confined one at that.

  • Create a user
        $ useradd test  
  • Map this user to the staff_u SELinux user
        $ semanage login -a -s staff_u test
    
        Login Name           SELinux User         MLS/MCS Range        Service
    
        __default__          user_u               s0                   *
        ceres                sysadm_u             s0-s0:c0.c1023       *
        juno                 staff_u              s0                   *
        root                 root                 s0-s0:c0.c1023       *
        system_u             system_u             s0-s0:c0.c1023       *
        test                 staff_u              s0-s0:c0.c1023       *
    
  • Log in as the test user and connect to the libvirt socket using virsh
        [mniranja@mniranja mar20]$ ssh test@10.65.201.167
        test@10.65.201.167's password: 
        Last login: Wed Mar 20 00:20:13 2013 from localhost
        [test@dhcp201-167 ~]$ id -Z
        staff_u:staff_r:staff_t:s0-s0:c0.c1023
    
  • Connect to libvirt socket
        [test@dhcp201-167 ~]$ virsh -c qemu:///system
        error: authentication failed: Authorization requires authentication but no agent is available.
    
        error: failed to connect to the hypervisor
    

As you can see above, the connection is not allowed. In earlier versions of Fedora you could use PolicyKit to create an authorization rule for connecting to the libvirt socket; refer to the libvirt documentation. This method is also called PolicyKit Local Authority. So on a Fedora 16 system I had the following rule:

        [root@reserved 50-local.d]# cat 50-org.example-libvirt-remote-access.pkla 
        [Remote libvirt SSH access]
        Identity=unix-group:virt
        Action=org.libvirt.unix.manage;org.libvirt.unix.monitor
        ResultAny=yes
        ResultInactive=yes
        ResultActive=yes

The above allows users of the group "virt" to access and manage libvirt through the PolicyKit actions "org.libvirt.unix.manage" and "org.libvirt.unix.monitor". The rule is placed in the file 50-org.example-libvirt-remote-access.pkla under the directory /etc/polkit-1/localauthority/50-local.d.
I hoped the same would work on Fedora 18, but it doesn't: PolicyKit Local Authority has been removed entirely. Instead, all custom polkit rules must be placed under the /etc/polkit-1/rules.d/ directory, and the rule syntax has changed to JavaScript. Refer to DavidZ's blog for more information about the change.

On Fedora 18 I managed to do the same by adding the following rule file, 10.virt.rules, created under the /etc/polkit-1/rules.d directory:

        [root@dhcp201-167 rules.d]# cat 10.virt.rules 
        polkit.addRule(function(action, subject) {
            polkit.log("action=" + action);
            polkit.log("subject=" + subject);
            var now = new Date();
            polkit.log("now=" + now);
            if ((action.id == "org.libvirt.unix.manage" ||
                 action.id == "org.libvirt.unix.monitor") &&
                subject.isInGroup("virt")) {
                return polkit.Result.YES;
            }
            return null;
        });

Thanks to Gilbert. As you can see, the above rule allows the polkit actions "org.libvirt.unix.manage" and "org.libvirt.unix.monitor" for all users of the group "virt".

  • Restart polkit service
        $ systemctl restart polkit.service
    
  • Add the user test to group virt
        $ usermod -aG virt test
    
  • Log in as the test user and connect to libvirt using virsh
        [test@dhcp201-167 ~]$ id -Z
        staff_u:staff_r:staff_t:s0-s0:c0.c1023
    
        [test@dhcp201-167 ~]$ id
        uid=1002(test) gid=1003(test) groups=1003(test),1001(virt) context=staff_u:staff_r:staff_t:s0-s0:c0.c1023
    
        [test@dhcp201-167 ~]$ virsh -c qemu:///system
        Welcome to virsh, the virtualization interactive terminal.
    
        Type:  'help' for help with commands
               'quit' to quit
    
  • Check the logs using journalctl
        [root@dhcp201-167 ~]# journalctl -xn
        -- Logs begin at Tue 2013-03-19 22:54:05 EDT, end at Wed 2013-03-20 00:43:25 EDT. --
        Mar 20 00:43:02 dhcp201-167.englab.pnq.redhat.com kernel: usb 1-1.3: Product: USB Optical Mouse
        Mar 20 00:43:02 dhcp201-167.englab.pnq.redhat.com kernel: usb 1-1.3: Manufacturer: PixArt
        Mar 20 00:43:02 dhcp201-167.englab.pnq.redhat.com kernel: input: PixArt USB Optical Mouse as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.3/1-1.3:1.0/input/in
        Mar 20 00:43:02 dhcp201-167.englab.pnq.redhat.com kernel: hid-generic 0003:0461:4E22.006D: input,hidraw0: USB HID v1.11 Mouse [PixArt USB Optical Mouse] on usb
        Mar 20 00:43:18 dhcp201-167.englab.pnq.redhat.com sshd[3722]: Accepted password for test from 10.3.235.177 port 53789 ssh2
        Mar 20 00:43:18 dhcp201-167.englab.pnq.redhat.com systemd-logind[596]: New session 18 of user test.
        -- Subject: A new session 18 has been created for user test
        -- Defined-By: systemd
        -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
        -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
        -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/catalog/8d45620c1a4348dbb17410da57c60c66
        -- 
        -- A new session with the ID 18 has been created for the user test.
        -- 
        -- The leading process of the session is 3722.
        Mar 20 00:43:18 dhcp201-167.englab.pnq.redhat.com sshd[3722]: pam_unix(sshd:session): session opened for user test by (uid=0)
        Mar 20 00:43:25 dhcp201-167.englab.pnq.redhat.com polkitd[1688]: /etc/polkit-1/rules.d/10.virt.rules:2: action=[Action id='org.libvirt.unix.manage']
        Mar 20 00:43:25 dhcp201-167.englab.pnq.redhat.com polkitd[1688]: /etc/polkit-1/rules.d/10.virt.rules:3: subject=[Subject pid=3791 user='test' groups=test,virt,
        Mar 20 00:43:25 dhcp201-167.englab.pnq.redhat.com polkitd[1688]: /etc/polkit-1/rules.d/10.virt.rules:5: now=Wed Mar 20 2013 00:43:25 GMT-0400 (EDT)
    

Introduction to CTDB Cluster


Why CTDB?

  • Traditionally, clustering involves a SAN connected to n nodes. The storage can be accessed only by the nodes participating in the cluster, and as the need for more storage and more users grows, the cluster remains limited to those few nodes.
  • So we need a file system that can be accessed by an arbitrary number of clients, not just the systems participating in the cluster. One answer to this problem is a distributed file system.
  • We need to distribute the existing shared storage using network protocols like NFS and CIFS. With Samba and CTDB we can achieve this goal of distributing the shared file system using the CIFS protocol.
  • CTDB was originally developed as cluster-enhancement software; it provides high availability and load balancing, which makes file services like Samba, NFS and FTP clusterable.

Basic Infrastructure of CTDB

  • Storage is attached to the nodes participating in the cluster through FC or iSCSI
  • A shared file system that supports POSIX fcntl locks:
      • IBM General Parallel File system (GPFS)
      • Global File system (GFS)
      • GNU Cluster File system (Gluster)
      • Sun’s Lustre
      • OCFS2

Basics of CIFS File system

  • CIFS (Common Internet File System) is a standard remote file system access protocol for use over a network, enabling groups of users to connect and share documents
  • CIFS is open and cross-platform, based on the SMB (Server Message Block) protocol, the native file-sharing protocol of the Windows operating system. On RHEL it is implemented using Samba
  • CIFS runs over TCP/IP

Basics of Samba

  • Samba provides file and print services for all clients using the SMB/CIFS protocols
  • Apart from file and print services, it also handles authentication and authorization, name resolution and service announcement
  • File and print services are provided by the smbd daemon
  • Name resolution and browsing are provided by the nmbd daemon
  • The configuration file is /etc/samba/smb.conf

TDB (Trivial Database)

  • Samba keeps track of all the information needed to serve clients in a series of *.tdb files
  • They are located in /var/lib/samba or /var/cache/samba
  • Some of the TDB files are persistent
  • TDB files are very small, like Berkeley DB files
  • They allow multiple simultaneous writers

Example TDB Files:

  • account_policy.tdb: NT account policy settings such as password expiration
  • brlock.tdb: byte-range locks
  • connections.tdb: share connections (used to enforce max connections, etc.)
  • messages.tdb: Samba messaging system

What Does CTDB do ?

  • CTDB (Clustered Trivial Database) is a very thin and fast database developed to make Samba clusterable.
  • What CTDB does is make it possible for Samba to run on several different hosts in a network and serve the same data at the same time.
  • This means Samba becomes a clustered service in which all nodes are active, export the Samba shares and serve read-write operations at the same time, making it highly available.
  • To do the above we require a method of communication (IPC) between the samba daemons running on the nodes, and they must share some persistent data (TDB files). Some of the information that must be shared:
  • User information
  • For Samba acting as a member server of a domain, the domain SID must be shared
  • The user mapping tables: mappings of Unix UIDs and GIDs to Windows users and groups
  • The active SMB sessions and connections
  • Locking information, such as byte-range locks granted exclusively to users accessing a particular file, has to be shared between all the nodes. These are Windows locks, i.e. when multiple Windows/Samba clients access files these locks are granted by the smbd daemon, so it makes sense to share them between the smbd daemons on different nodes.

Sample diagram of how CTDB messages are shared between 2 CTDB nodes:

Below is the list of TDB files that are shared between CTDB nodes:

  • SMB Sessions (sessionid.tdb)
  • share connections (connections.tdb)
  • share modes (locking.tdb)
  • byte range locks (brlock.tdb)
  • user database (passdb.tdb)
  • domain Join Information (secrets.tdb)
  • id mapping tables (winbind_idmap.tdb)
  • registry (registry.tdb)

Requirements to configure CTDB cluster on RHEL6

    • GFS Packages
    • HA Packages
    • ctdb, samba
    • ctdb-tools

Configuring samba to use CTDB

  • We require 2 separate networks: an internal network through which the CTDB daemons communicate, and a public network through which the cluster offers services like Samba and NFS
  • Install samba and CTDB Packages

$ yum install samba ctdb tdb-tools

  • Configure /etc/samba/smb.conf to make Samba cluster-aware by adding the lines below to the [global] section of smb.conf

clustering = yes
idmap backend = tdb2

  • CTDB Cluster configuration

/etc/sysconfig/ctdb is the primary configuration file; it contains the startup parameters for ctdb. The important parameters are:

CTDB_NODES

CTDB_NODES=/etc/ctdb/nodes

This parameter specifies a file that must be created and should contain the list of private IP addresses that the CTDB daemons will use in the cluster. It should be a private, non-routable subnet used only for cluster traffic. This file must be identical on all nodes in the cluster.

Contents of /etc/ctdb/nodes:

192.168.122.7
192.168.122.8
192.168.122.9
192.168.122.10

CTDB_RECOVERY_LOCK

This parameter specifies the lock file that the CTDB daemons use to arbitrate which node is acting as recovery master. This file must be held on shared storage so that all CTDB daemons in the cluster access and lock the same file.

CTDB_RECOVERY_LOCK="/ctdb/cifs/lockfile"

CTDB_PUBLIC_ADDRESSES

This parameter specifies the name of the file which contains the list of public addresses that a particular node can host. While running, the CTDB cluster assigns each public address that exists in the entire cluster to one node, which hosts that public address. These are the addresses that the smbd daemons and other services bind to and which clients use to connect to the cluster.

Example 3 node cluster:
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses

Content of /etc/ctdb/public_addresses:

10.65.208.142/22 eth0
10.65.208.143/22 eth0
10.65.208.144/22 eth0

Configure them as one DNS A record (one name) with multiple IP addresses and let round-robin DNS distribute the clients across the nodes of the cluster.
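The round-robin behaviour is easy to picture with a short sketch (illustrative Python, reusing the three addresses from the example above):

```python
from itertools import cycle

# One DNS name resolving, round-robin, to the three public addresses.
addresses = ["10.65.208.142", "10.65.208.143", "10.65.208.144"]
resolver = cycle(addresses)

# Four successive client lookups wrap around the pool:
lookups = [next(resolver) for _ in range(4)]
print(lookups)
```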

The CTDB cluster utilizes IP takeover techniques to ensure that as long as at least one node in the cluster is available, all the public IP addresses will always be available to clients.

/etc/ctdb/events.d is a collection of scripts called by CTDB when certain events occur, to allow site-specific tasks to be performed.

  • Start the CTDB daemon and let ctdb start the smbd daemon; the samba daemons should not be started by the init process

#chkconfig ctdb on
#chkconfig smb off
#chkconfig nmb off

  • Start the ctdb daemon

# service ctdb start

Example Diagram of a 3 Node CTDB Cluster:

How does CTDB work?

  • On each node the CTDB daemon "ctdbd" is running; instead of writing directly to the TDB databases, samba talks to the local ctdbd
  • "ctdbd" negotiates the metadata for the TDBs over the network
  • For actual read and write operations, local copies are maintained on fast local storage
  • There are two kinds of TDB files: persistent and normal
  • Persistent TDB files must always be up to date, and each node always has an updated copy. These TDB files are kept locally (LTDB) on local storage, not on the shared storage, so read and write operations are fast
  • When a node wants to write to a persistent TDB, it locks the whole database, performs its read and write operations, and the transaction commit is finally distributed to all nodes and also written locally
  • Normal TDB files are maintained temporarily. The idea is that each node doesn't have to know all the records of a database; it is sufficient to know the records that affect its own client connections, so when a node goes down it is acceptable to lose those records
  • Each node carries certain roles
    • DMASTER (data master)
      • Holds the current, authoritative copy of a record
      • Moves around as nodes write to a record
    • LMASTER (location master)
      • Knows the location of the DMASTER
      • Knows where the record is stored
  • Only one node has the current authoritative copy of a record, i.e. the data master
      • Step-1: Get a lock on the record in the TDB
      • Step-2: Check whether we are the data master
        • If we are DMASTER for this record,
        • then operate on the record and unlock it when finished
      • Step-3: If we are not DMASTER for this record, unlock the record
      • Step-4: Send a request to the local CTDB daemon asking for the record to be migrated to this node
      • Step-5: Once we get a reply from the local ctdb daemon that the record is now locally available, go to Step-1
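The steps above can be sketched as a small loop. This is a toy model only (the real CTDB is written in C, and every class and function name here is illustrative):

```python
class Record:
    def __init__(self, dmaster):
        self.dmaster, self.value, self.locked = dmaster, 0, False
    def lock(self):   self.locked = True
    def unlock(self): self.locked = False

class Node:
    def __init__(self, name): self.name = name
    def request_migration(self, record):
        record.dmaster = self   # stand-in for ctdbd migrating the record

def access_record(node, record):
    while True:
        record.lock()                     # Step-1: lock the record
        if record.dmaster is node:        # Step-2: are we DMASTER?
            record.value += 1             # operate on the record
            record.unlock()
            return
        record.unlock()                   # Step-3: not DMASTER, unlock
        node.request_migration(record)    # Step-4: ask the local ctdbd
        # Step-5: record is now local; retry from Step-1

a, b = Node("A"), Node("B")
rec = Record(dmaster=a)
access_record(b, rec)                # forces one migration, then the write
print(rec.dmaster.name, rec.value)   # B 1
```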

Failover

  • CTDB assigns IP addresses from the pool (CTDB_PUBLIC_ADDRESSES) to the healthy nodes
  • When a node goes down, its IP is moved to another node
  • The client reconnects to the new node using tickle ACKs; the sequence is:
    • A node goes down
    • The client doesn't know yet that the IP has moved
    • The new node sends a TCP ACK with sequence number 0 to the client
    • The client sends a correct ACK back to the new node
    • The new node resets the connection using RST
    • The client re-establishes the connection to the new node
  • The recovery master performs recovery: it collects the most recent copy of each record from all nodes and becomes the data master.
  • The recovery master is determined by an election process: the RECOVERY_LOCK file acts as arbitrator and the nodes compete to take a (POSIX fcntl byte-range) lock on that file.
  • If the recovery master node is gone, the role has to be assigned to a new node.
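The IP-takeover idea can be illustrated with a tiny sketch (my own Python, not ctdb code): every public address is always assigned to some healthy node, so when a node fails its addresses simply land on the survivors:

```python
# Assign each public IP to a healthy node, round-robin style.
def assign_ips(public_ips, healthy_nodes):
    if not healthy_nodes:
        return {}                      # no healthy node: nothing is hosted
    return {ip: healthy_nodes[i % len(healthy_nodes)]
            for i, ip in enumerate(public_ips)}

ips = ["10.65.208.142", "10.65.208.143", "10.65.208.144"]
print(assign_ips(ips, ["node1", "node2", "node3"]))  # one IP per node
# node3 fails: its address moves to a surviving node
print(assign_ips(ips, ["node1", "node2"]))
```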

Commands to manage CTDB

$ ctdb status: provides basic information about the cluster and the status of the nodes.

$ ctdb ping: tries to ping each of the CTDB daemons in the cluster.

$ ctdb ip: prints the current status of the public IP addresses and which physical node is currently serving each IP.

$ onnode: runs commands on ctdb nodes.

Examples:
$ onnode all pidof ctdbd
$ onnode all netstat -tn | grep 4379

CTDB Status Messages:

"ctdb status" reports the node status. There are 5 possible states:

  • OK: the node is fully functional
  • DISCONNECTED: the node could not be reached through the network and is currently not participating in the cluster
  • UNHEALTHY: the ctdbd daemon is running but a service provided through ctdb has failed
  • BANNED: the node failed too many recovery attempts and is banned from participating in the cluster for a period of RecoveryBanPeriod seconds
  • STOPPED: a stopped node does not host any public IP address and is not part of the cluster

Troubleshooting:

  • The ctdb log file is /var/log/log.ctdb
  • The output of the "ctdb status" and "onnode" commands is helpful
  • /var/log/samba contains logs related to the smbd daemon
  • If needed, a tcpdump on port 4379 can be taken; wireshark can identify the CTDB protocol and display various CTDB statuses
  • Check the testparm output to verify that clustering is enabled

Documentation

Man Pages:

man ctdb
man ctdbd
man onnode

FAQ

Q) When CTDB is itself a cluster, why do we require HA packages such as cman to be installed?

CTDB will not work without Red Hat Cluster Suite: CTDB requires GFS, and GFS in turn requires cman to start dlm_controld and gfs_controld. So cman is a prerequisite for CTDB.

Q) How does CTDB solve the split-brain problem?

This problem doesn't arise in the first place, since CTDB is an all-active setup rather than an active/passive one where passive nodes suddenly become active.

Q) How do we identify which node is actually serving, i.e. which node is the data master?

The node holding the IP to which the client is connected is the data master: from the pool of public addresses, whichever IP the client connects to, the node that IP is assigned to becomes the data master (DMASTER).

Q) How do we identify which node is the recovery master (RMASTER)?

The node that holds the lock file. The lock file is saved on the shared file system (CTDB_RECOVERY_LOCK).


Using Openssl on RHEL6 in FIPS-140 mode and generating Certificates.

Tags

,

For a long time I have been trying to understand FIPS-140 certification and its effects. Today I finally got to configure a RHEL6 system in FIPS mode and use the openssl commands. Before we go and play with it, a brief intro to what FIPS and OpenSSL are.

The FIPS-140 standard specifies the security requirements for a cryptographic module used within a security system protecting sensitive information in computer and telecommunication systems. The US National Institute of Standards and Technology (NIST) publishes the FIPS series of standards for the implementation of cryptographic modules. The Cryptographic Module Validation Program (CMVP) validates cryptographic modules against Federal Information Processing Standard (FIPS) 140-2 and other cryptography-based standards.

FIPS 140-2 is primarily of interest to U.S., Canadian, and UK government agencies which have formal policies requiring use of FIPS 140 validated cryptographic software.

Products that have received a NIST/CSE validation are listed on the Cryptographic Module Validation List at http://csrc.nist.gov/cryptval/140-1/1401val.htm

OpenSSL is open-source software implementing the SSLv2/v3 and TLS protocols; it also provides general-purpose crypto libraries (libcrypto, libssl, etc.).

The intention of this article is to show how FIPS mode should be enabled on RHEL6 and how to use approved ciphers with openssl.

Before we start using openssl with FIPS-approved security functions, the operating system has to be brought into FIPS mode. For that we need to rebuild the initramfs with FIPS support, and prelinking must be undone on all libraries. I have enumerated the steps below.

Below are the steps to put RHEL6 system in FIPS mode  and use openssl with fips approved security functions.

Disable prelinking

change the line "PRELINKING=yes" to "PRELINKING=no" in /etc/sysconfig/prelink

For libraries that were already prelinked, the prelinking should be undone on all system files using the following command:

$ prelink -u -a

The initramfs should be regenerated with FIPS support; to do that, install the dracut-fips package and rebuild the initramfs with dracut:

$ yum install dracut-fips

Edit /etc/grub.conf, add fips=1 to the end of the "kernel" line, and reboot the system:

kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=/dev/mapper/myvg-rootvol rd_LVM_LV=myvg/rootvol rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto fips=1 

For generating certificates, openssl should be used only with a specific set of approved security functions. For the list of approved security functions that can be used, refer to NIST.

In brief, the below algorithms can be used for signing, hashing and encryption:

  • Symmetric Key (AES, TDEA and EES)
  • Asymmetric Key (DSS – DSA, RSA and ECDSA)
  • Secure Hash Standard (SHS) (SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512)
  • Message Authentication (Triple-DES, AES and SHS)

To check whether openssl is operating in FIPS mode, issue the following:

$ openssl md5 somefile

The above should fail, as MD5 is not a FIPS-approved hash standard.

$ openssl sha1 somefile

The above works, as SHA-1 is a FIPS-approved hash standard.
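A related check can be made from Python, whose hashlib is built on the same OpenSSL library: in FIPS mode a disallowed digest raises an error. This is an illustrative sketch; on a non-FIPS system both calls simply succeed:

```python
import hashlib

def try_digest(name, data=b"somefile"):
    """Return the hex digest, or 'rejected' if the provider refuses it."""
    try:
        return hashlib.new(name, data).hexdigest()
    except ValueError:   # raised for digests disabled by the crypto policy
        return "rejected"

print(try_digest("sha1"))  # FIPS-approved, always computes
print(try_digest("md5"))   # "rejected" when the system runs in FIPS mode
```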

Let's generate a self-signed CA certificate.

1. Generate the key

$ openssl genrsa  1024 > dhcp210-11.key

2. Convert the key to PKCS8 Format

The encryption used by the genrsa command cannot be used in FIPS mode because it uses MD5 to convert the password to a key. We can either write the key unencrypted (no -des3 option) and then convert it using the "openssl pkcs8" command if we need it encrypted, or generate the key with the -newkey option of the "openssl req" command, which already writes it encrypted in PKCS#8 format.

$ openssl pkcs8 -in dhcp210-11.key -topk8 -out dhcp210-11-enc.key -v1 PBE-SHA1-3DES

3. Create a self-signed CA certificate.

$ openssl req -new -x509 -key dhcp210-11-enc.key -out dhcp210-11.crt -days 366

Or skip steps 1 and 2 and generate the key in place (the -newkey option), which writes the private key encrypted in PKCS#8 format:

$ openssl req -new -x509 -newkey rsa:1024 -out dhcp210-11.crt -days 365

References:

1. http://csrc.nist.gov/publications/PubsFIPS.html

2. www.openssl.org

Renewing self signed CA Certs using certutil


This is a how-to article on renewing self-signed CA certs using certutil commands. To create self-signed certificate authorities and other certificates, refer to the Mozilla documentation.

Just as normal user or server certificates expire, CA certs also expire after a certain period, and one needs to know how to renew them.

Since this how-to is based on Mozilla NSS, I will explain with an example NSS database where a CA and user certs were created using certutil commands.

$certutil -L -d /etc/pki/testca

Certificate Nickname                     Trust Attributes
                                         SSL,S/MIME,JAR/XPI
testca                                   CTu,u,u
www                                      u,u,u

testca is the CA certificate and www is a user cert

$certutil -L -d /etc/pki/testca -n testca | head -n 15
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: PKCS #1 SHA-1 With RSA Encryption
        Issuer: "CN=rootca0,O=Example.com,C=US"
        Validity:
            Not Before: Tue Nov 01 02:29:56 2011
            Not After : Thu Dec 01 02:29:56 2011
        Subject: "CN=rootca0,O=Example.com,C=US" 

To view the private keys, issue the command below:

$ certutil -K -d /etc/pki/testca
certutil: Checking token "NSS Certificate DB" in slot "NSS User Private Key and Certificate Services"
Enter Password or Pin for "NSS Certificate DB":
< 0> rsa    2caa8cf41a5fc803902034710f59c296326cdcc8   NSS Certificate DB:testca
< 1> rsa    99059e9f59b710edcee11d4bd32fd97977bc121e   NSS Certificate DB:www

From the above output you can see the nickname of the private key used by testca.

The procedure to renew the testca certificate is:

1. Create a certificate request using the same private key

2. Get it signed by the old CA

3. Add the newly signed CA certificate to the NSS database

Create a certificate request using the same private key:

$ certutil -d . -R -k "NSS Certificate DB:testca" -s "CN=rootca0,o=Example.com,c=US" -a -o rootca.req

A brief explanation of the command options:

-R: create a certificate-request file that can be submitted to a Certificate Authority (CA) for processing into a finished certificate. Output defaults to standard out unless the -o output-file argument is used.
-k: the existing private key to reuse, identified as token:nickname
-s: subject of the certificate (use the same subject as the earlier CA)
-m: serial number (used in the signing command below)
-v: period, in months, for which the certificate will be valid (used in the signing command below)

Sign the certificate request with the old CA:

$ certutil -C -d . -c "testca" -a -i rootca.req -t "CT,," -o cacert.crt -m 0 -v 12

Add the newly signed certificate to the NSS database:

$ certutil -A -d . -n "testca" -a -i cacert.crt -t "CT,,"

List the CA cert to check the validity period:

$ certutil -L -d . -n testca -a
-----BEGIN CERTIFICATE-----
MIIB4jCCAUugAwIBAgIFAJYUeXowDQYJKoZIhvcNAQEFBQAwNTELMAkGA1UEBhMC
VVMxFDASBgNVBAoTC0V4YW1wbGUuY29tMRAwDgYDVQQDEwdyb290Y2EwMB4XDTEx
MTEwMTAzMTczMloXDTEyMTEwMTAzMTczMlowNTELMAkGA1UEBhMCVVMxFDASBgNV
BAoTC0V4YW1wbGUuY29tMRAwDgYDVQQDEwdyb290Y2EwMIGfMA0GCSqGSIb3DQEB
AQUAA4GNADCBiQKBgQDHiALVOGuCo2c0xjIXqL5Q6RBSUva/b+NivWk9knSpe998
yFQ7mzbi8g4EzlOt896iVLkjiekSbtffxx6ye5ruGfwddpo6AnpXMhZvG7DKrWpZ
4CD1EPpW++DszuKBoZE50rcdHZC2o6iMAm2POXWCaHIapPfXbdahuyQQtgC+RQID
AQABMA0GCSqGSIb3DQEBBQUAA4GBALVoqevbP7haPKPyZwgD4kB4OofOc8z22KZh
+/KTai5RgnXbiGRK0hpV/imHC6j2KrPb3awmUTMXzWjQ9Pj4f4nuKFmM2QY8Vspb
PziB7IPlxKh1m30QZzVJHlTL7uMMFud5CJVSb1iB4J6BackhN+5MTGZRytXfN9A2
pHPzcjQM
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB8DCCAVmgAwIBAgIBADANBgkqhkiG9w0BAQUFADA1MQswCQYDVQQGEwJVUzEU
MBIGA1UEChMLRXhhbXBsZS5jb20xEDAOBgNVBAMTB3Jvb3RjYTAwHhcNMTExMTAx
MDIyOTU2WhcNMTExMjAxMDIyOTU2WjA1MQswCQYDVQQGEwJVUzEUMBIGA1UEChML
RXhhbXBsZS5jb20xEDAOBgNVBAMTB3Jvb3RjYTAwgZ8wDQYJKoZIhvcNAQEBBQAD
gY0AMIGJAoGBAMeIAtU4a4KjZzTGMheovlDpEFJS9r9v42K9aT2SdKl733zIVDub
NuLyDgTOU63z3qJUuSOJ6RJu19/HHrJ7mu4Z/B12mjoCelcyFm8bsMqtalngIPUQ
+lb74OzO4oGhkTnStx0dkLajqIwCbY85dYJochqk99dt1qG7JBC2AL5FAgMBAAGj
EDAOMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAP6F9K/y+WcL4tLij
5vmxdDK+iV/jRktQc0/QugpUUcoWT7pRVsGfsYhAUYMhlZmnxHuQeLp13xPn1FcY
DaojOPoQCifadC0OvlOivTnxQNU1nOLvWuYTfVoQq79Ji5fZVywQ5T41irV5uvGb
hU00Ebw6/UtJOA4TwaIgXDSs45g=
-----END CERTIFICATE-----

As you can see above, it lists both certificates (old and new). Remove the -a option from the above command to see pretty-printed output:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            00:96:14:79:7a
        Signature Algorithm: PKCS #1 SHA-1 With RSA Encryption
        Issuer: "CN=rootca0,O=Example.com,C=US"
        Validity:
            Not Before: Tue Nov 01 03:17:32 2011
            Not After : Thu Nov 01 03:17:32 2012
        Subject: "CN=rootca0,O=Example.com,C=US"
        Subject Public Key Info:
            Public Key Algorithm: PKCS #1 RSA Encryption
            RSA Public Key:
                Modulus:
                    c7:88:02:d5:38:6b:82:a3:67:34:c6:32:17:a8:be:50:
                    e9:10:52:52:f6:bf:6f:e3:62:bd:69:3d:92:74:a9:7b:
                    df:7c:c8:54:3b:9b:36:e2:f2:0e:04:ce:53:ad:f3:de:
                    a2:54:b9:23:89:e9:12:6e:d7:df:c7:1e:b2:7b:9a:ee:
                    19:fc:1d:76:9a:3a:02:7a:57:32:16:6f:1b:b0:ca:ad:
                    6a:59:e0:20:f5:10:fa:56:fb:e0:ec:ce:e2:81:a1:91:
                    39:d2:b7:1d:1d:90:b6:a3:a8:8c:02:6d:8f:39:75:82:
                    68:72:1a:a4:f7:d7:6d:d6:a1:bb:24:10:b6:00:be:45
                Exponent: 65537 (0x10001)
    Signature Algorithm: PKCS #1 SHA-1 With RSA Encryption
    Signature:
        b5:68:a9:eb:db:3f:b8:5a:3c:a3:f2:67:08:03:e2:40:
        78:3a:87:ce:73:cc:f6:d8:a6:61:fb:f2:93:6a:2e:51:
        82:75:db:88:64:4a:d2:1a:55:fe:29:87:0b:a8:f6:2a:
        b3:db:dd:ac:26:51:33:17:cd:68:d0:f4:f8:f8:7f:89:
        ee:28:59:8c:d9:06:3c:56:ca:5b:3f:38:81:ec:83:e5:
        c4:a8:75:9b:7d:10:67:35:49:1e:54:cb:ee:e3:0c:16:
        e7:79:08:95:52:6f:58:81:e0:9e:81:69:c9:21:37:ee:
        4c:4c:66:51:ca:d5:df:37:d0:36:a4:73:f3:72:34:0c
    Fingerprint (MD5):
        2B:90:4E:AE:E5:91:37:20:AD:41:A2:B1:4A:CC:16:A5
    Fingerprint (SHA1):
        DA:6C:F5:A1:A1:03:9B:6E:11:2C:BF:FA:DA:43:5C:D1:52:0B:B5:1B
    Certificate Trust Flags:
        SSL Flags:
            Valid CA
            Trusted CA
            User
            Trusted Client CA
        Email Flags:
            User
        Object Signing Flags:
            User
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: PKCS #1 SHA-1 With RSA Encryption
        Issuer: "CN=rootca0,O=Example.com,C=US"
        Validity:
            Not Before: Tue Nov 01 02:29:56 2011
            Not After : Thu Dec 01 02:29:56 2011
        Subject: "CN=rootca0,O=Example.com,C=US"
        Subject Public Key Info:
            Public Key Algorithm: PKCS #1 RSA Encryption
            RSA Public Key:
                Modulus:
                    c7:88:02:d5:38:6b:82:a3:67:34:c6:32:17:a8:be:50:
                    e9:10:52:52:f6:bf:6f:e3:62:bd:69:3d:92:74:a9:7b:
                    df:7c:c8:54:3b:9b:36:e2:f2:0e:04:ce:53:ad:f3:de:
                    a2:54:b9:23:89:e9:12:6e:d7:df:c7:1e:b2:7b:9a:ee:
                    19:fc:1d:76:9a:3a:02:7a:57:32:16:6f:1b:b0:ca:ad:
                    6a:59:e0:20:f5:10:fa:56:fb:e0:ec:ce:e2:81:a1:91:
                    39:d2:b7:1d:1d:90:b6:a3:a8:8c:02:6d:8f:39:75:82:
                    68:72:1a:a4:f7:d7:6d:d6:a1:bb:24:10:b6:00:be:45
                Exponent: 65537 (0x10001)
        Signed Extensions:
            Name: Certificate Basic Constraints
            Data: Is a CA with no maximum path length.
    Signature Algorithm: PKCS #1 SHA-1 With RSA Encryption
    Signature:
        3f:a1:7d:2b:fc:be:59:c2:f8:b4:b8:a3:e6:f9:b1:74:
        32:be:89:5f:e3:46:4b:50:73:4f:d0:ba:0a:54:51:ca:
        16:4f:ba:51:56:c1:9f:b1:88:40:51:83:21:95:99:a7:
        c4:7b:90:78:ba:75:df:13:e7:d4:57:18:0d:aa:23:38:
        fa:10:0a:27:da:74:2d:0e:be:53:a2:bd:39:f1:40:d5:
        35:9c:e2:ef:5a:e6:13:7d:5a:10:ab:bf:49:8b:97:d9:
        57:2c:10:e5:3e:35:8a:b5:79:ba:f1:9b:85:4d:34:11:
        bc:3a:fd:4b:49:38:0e:13:c1:a2:20:5c:34:ac:e3:98
    Fingerprint (MD5):
        58:C8:D8:75:3A:81:90:94:C9:06:04:51:52:8E:E7:4B
    Fingerprint (SHA1):
        07:D2:80:8F:05:74:C1:86:43:1F:96:52:1F:A7:B4:4E:BF:61:7F:70
    Certificate Trust Flags:
        SSL Flags:
            Valid CA
            Trusted CA
            User
            Trusted Client CA
        Email Flags:
            User
        Object Signing Flags:
            User

Finally, validate the certificates:

$ certutil -V -d . -u C -n www
certutil: certificate is valid
$ certutil -V -d . -u C -n testca
certutil: certificate is valid
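The `certutil -V` checks above have an openssl counterpart. As a rough sketch with hypothetical file names (a throwaway self-signed CA stands in here for the testca cert, which could be exported from the NSS database with `certutil -L -n testca -a`):

```shell
# Hypothetical sketch: create a throwaway self-signed CA with openssl,
# then verify it the way 'certutil -V' validates certs in the NSS DB.
openssl req -x509 -newkey rsa:2048 -nodes -keyout rootca.key -out rootca.pem \
    -days 365 -subj "/C=US/O=Example.com/CN=rootca0"

# A self-signed cert verifies successfully when it is itself the trust
# anchor passed via -CAfile; openssl prints "rootca.pem: OK".
openssl verify -CAfile rootca.pem rootca.pem
```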