Thursday, June 16, 2016

Running Jenkins jobs in Docker containers

One of my main tasks at work is to configure Jenkins to act as a hub for all the deployment and automated testing jobs we run. We use CloudBees Jenkins Enterprise, mostly for its Role-Based Access Control plugin, which allows us to create one Jenkins folder per project/application and establish fine-grained access control to that folder for groups of users. We also make heavy use of the Jenkins Enterprise Pipeline features (which I think are also available these days in the open source version).

Our Jenkins infrastructure is composed of a master node and several executor nodes which can run jobs in parallel if needed.

One pattern that my colleague Will Wright and I have decided upon is to run all Jenkins jobs as Docker containers. This way, we only need to install Docker Engine on the master node and the executor nodes. No need to install any project-specific pre-requisites or dependencies on every Jenkins node. All of these dependencies and pre-reqs are instead packaged in the Docker containers. It's a simple but powerful idea that has worked very well for us. One of the nice things about this pattern is that you can keep adding various types of automated tests: if it can run from the command line, then it can run in a Docker container, which means you can run it from Jenkins!

I have seen this pattern discussed in multiple places recently, for example in this blog post about "Using Docker for a more flexible Jenkins".

Here are some examples of Jenkins jobs that we create for a given project/application:
  • a deployment job that runs Capistrano in its own Docker container, against targets in various environments (development, staging, production); this is a Pipeline script written in Groovy, which can call other jobs below
  • a Web UI testing job that runs the Selenium Python WebDriver and drives Firefox in headless mode (see my previous post on how to do this with Docker)
  • a JavaScript syntax checking job that runs JSHint against the application's JS files
  • an SSL scanner/checker that runs SSLyze against the application endpoints
We also run other types of tasks, such as AWS CLI commands that perform certain actions, for example invalidating a CloudFront resource. I am going to show here how we create a Docker image for one of these jobs, how we test it locally, and how we then integrate it into Jenkins.

I'll use as an example a simple Docker image that installs the AWS CLI package and runs a command when the container is invoked via 'docker run'.

I assume you have a local version of Docker installed. If you are on a Mac, you can use Docker Toolbox, or, if you are lucky and got access to it, you can use the native Docker for Mac. In any case,  I will assume that you have a local directory called awscli with the following Dockerfile in it:

FROM ubuntu:14.04

MAINTAINER You Yourself <you@example.com>

# disable interactive functions
ARG DEBIAN_FRONTEND=noninteractive
ENV AWS_ACCESS_KEY_ID=""
ENV AWS_SECRET_ACCESS_KEY=""
ENV AWS_COMMAND=""

RUN apt-get update && \
    apt-get install -y python-pip && \
    pip install awscli

WORKDIR /root

CMD (export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID; export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY; $AWS_COMMAND)

As I mentioned, this simply installs the awscli Python package via pip, then runs a command given as an environment variable when you invoke 'docker run'. It also uses two other environment variables that contain the AWS access key ID and secret access key. You don't want to hardcode these secrets in the Dockerfile and have them end up on GitHub.

The next step is to build an image based on this Dockerfile. I'll call the image awscli and I'll tag it as local:

$ docker build -t awscli:local .

Then you can run a container based on this image. The command line looks a bit complicated because I am passing (via the -e switch) the 3 environment variables discussed above:


$ docker run --rm -e AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY -e AWS_COMMAND='aws cloudfront create-invalidation --distribution-id=abcdef --invalidation-batch Paths={Quantity=1,Items=[/media/*]},CallerReference=my-invalidation-123456' awscli:local

(where distribution-id needs to be the actual ID of your CloudFront distribution, and CallerReference needs to be unique per invalidation)

If all goes well, you should see the output of the 'aws cloudfront create-invalidation' command.
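Since the command to run is just an environment variable, the same image can execute any other AWS CLI call. For example, to check on the status of an invalidation afterwards (the invalidation ID below is a placeholder for the Id returned by the create call, and the distribution ID is again your own):

$ docker run --rm -e AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY -e AWS_COMMAND='aws cloudfront get-invalidation --distribution-id=abcdef --id=YOUR_INVALIDATION_ID' awscli:local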

In our infrastructure, we have a special GitHub repository where we check in the various folders containing the Dockerfiles and any static files that need to be copied over to the Docker images. When we push the awscli directory to GitHub for example, we have a Jenkins job that will be notified of that commit and that will build the Docker image (similarly to how we did it locally with 'docker build'), then it will 'docker push' the image to a private AWS ECR repository we have.
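For reference, a minimal sketch of what that image-building job's shell step does, assuming the awscli directory sits at the root of the repository and using the 'aws ecr get-login' helper that the AWS CLI provided at the time (the ECR repository URL and region are placeholders):

cd awscli

# build the image from the Dockerfile in this directory
docker build -t awscli:latest .

# log the Docker client in to the private ECR registry, then tag and push the image
$(aws ecr get-login --region us-west-2)
docker tag awscli:latest MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli:latest
docker push MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli:latest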

Now let's assume we want to create a Jenkins job that will run this image as a container. First we define 2 secret credentials, specific to the Jenkins folder where we want to create the job (there are also global Jenkins credentials that can apply to all folders). These credentials are of type "Secret text" and contain the AWS access key ID and the AWS secret access key.

Then we create a new Jenkins job of type "Freestyle project" and call it cloudfront.invalidate. The build for this job is parameterized and has 2 parameters: CF_ENVIRONMENT, a drop-down containing the values "Staging" and "Production" that selects the CloudFront distribution to invalidate; and CF_RESOURCE, a text parameter set to the resource that needs to be invalidated (e.g. /media/*).

In the Build Environment section of the Jenkins job, we check "Use secret text(s) or file(s)" and add 2 Bindings, one for the first secret text credential containing the AWS access key ID, which we save in a variable called AWS_ACCESS_KEY_ID, and the other one for the second secret text credential containing the AWS secret access key, which we save in a variable called AWS_SECRET_ACCESS_KEY.

The Build section for this Jenkins job has a step of type "Execute shell" which uses the parameters and variables defined above and invokes 'docker run' using the path to the Docker image from our private ECR repository:

DISTRIBUTION_ID=MY_CF_DISTRIBUTION_ID_FOR_STAGING
if [ "$CF_ENVIRONMENT" = "Production" ]; then
    DISTRIBUTION_ID=MY_CF_DISTRIBUTION_ID_FOR_PRODUCTION
fi

INVALIDATION_ID=jenkins-invalidation-`date +%Y%m%d%H%M%S`

COMMAND="aws cloudfront create-invalidation --distribution-id=$DISTRIBUTION_ID --invalidation-batch Paths={Quantity=1,Items=[$CF_RESOURCE]},CallerReference=$INVALIDATION_ID"

docker run --rm -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_COMMAND="$COMMAND" MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli


When this job is run, the Docker image gets pulled down from AWS ECR, then a container based on the image is run and then removed upon completion (that's what --rm does, so that no old containers are left around).

I'll write another post soon with some more examples of Jenkins jobs that we run as Docker containers to do Selenium testing, JSHint checking and SSLyze scanning.






Thursday, May 26, 2016

Setting up AWS CloudFront for Magento

Here are some steps I jotted down for setting up AWS CloudFront as a CDN for the 3 asset directories that are used by Magento installations. I am assuming your Magento application servers are behind an ELB.


SSL certificate upload to AWS

Install aws command line utilities.

$ pip install awscli

Configure AWS credentials

Create IAM user and associate it with the IAMFullAccess policy. Run ‘aws configure’ and specify the user’s keys and the region.
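The 'aws configure' session looks like this (the keys are of course your own; I am using us-west-2 as the region):

$ aws configure
AWS Access Key ID [None]: YOUR_ACCESS_KEY_ID
AWS Secret Access Key [None]: YOUR_SECRET_ACCESS_KEY
Default region name [None]: us-west-2
Default output format [None]: json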

Bring SSL key, certificate and intermediate certificate in current directory:

-rw-r--r-- 1 root root 4795 Apr 11 20:34 gd_bundle-g2-g1.crt
-rw-r--r-- 1 root root 1830 Apr 11 20:34 wildcard.mydomain.com.crt
-rw------- 1 root root 1675 Apr 11 20:34 wildcard.mydomain.com.key

Run the following script to upload the wildcard SSL certificate to be used in the production CloudFront setup:

$ cat add_ssl_cert_to_iam_for_prod_cloudfront.sh
#!/bin/bash

aws iam upload-server-certificate --server-certificate-name WILDCARD_MYDOMAIN_COM_FOR_PROD_CF --certificate-body file://wildcard.mydomain.com.crt --private-key file://wildcard.mydomain.com.key --certificate-chain file://gd_bundle-g2-g1.crt --path /cloudfront/prod/


After uploading the SSL certificates, they will be available in drop-downs when configuring CloudFront for SSL.
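You can also verify the upload from the command line; this should list the certificate together with its path and ARN:

$ aws iam list-server-certificates --path-prefix /cloudfront/prod/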

Apache Cache-Control headers setup
  • Add these directives (modifying max-age accordingly) in all Apache vhosts, both for port 80 and for port 443
 <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Cache-Control "max-age=604800, public"
 </FilesMatch>

CloudFront setup
  • Origin: prod ELB (mydomain-production-lb-9321962155.us-west-2.elb.amazonaws.com)
  • Alternate domain name: cdn.mydomain.com
  • SSL certificate: ID_OF_CERTIFICATE_UPLOADED_ABOVE
  • Custom SSL client support: Only Clients that Support Server Name Indication (SNI)
  • Domain name: eg7ac9k0fa3qwc.cloudfront.net
  • Behaviors
    • /media/* /skin/* /js/*
    • Viewer protocol policy: HTTP and HTTPS
    • Allowed HTTP methods: GET, HEAD
    • Forward headers: None
    • Object caching: Use origin cache headers
    • Forward cookies: None
    • Forward query strings: Yes
    • Smooth streaming: No
    • Restrict viewer access: No
    • Compress objects automatically: No

DNS setup
  • cdn.mydomain.com is a CNAME pointing to the CloudFront domain name above eg7ac9k0fa3qwc.cloudfront.net

Magento setup

This depends on the version of Magento you are running (1.x or 2.x), but you want to look for the settings for the Base Skin URL, Base Media URL and Base Javascript URL, which are usually under System->Configuration->General->Web. You need to set them to point to the domain name you set up as a CNAME to CloudFront.

Base Skin URL: http://cdn.mydomain.com/skin
Base Media URL: http://cdn.mydomain.com/media
Base Javascript URL: http://cdn.mydomain.com/js
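
Once the CNAME is in place and Magento rewrites asset URLs to point at cdn.mydomain.com, a quick sanity check is to request a static asset through the CDN and look at the response headers (the file path below is just an example); on a repeated request you should see the Cache-Control value set by Apache and an X-Cache header reading "Hit from cloudfront":

$ curl -sI http://cdn.mydomain.com/media/example.jpg | egrep -i 'cache-control|x-cache'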

More in-depth Magento-specific instructions for integrating with CloudFront are available here.

Friday, April 15, 2016

LDAP server setup and client authentication

We recently bought a CloudBees Jenkins Enterprise license at work and I wanted to tie the user accounts to a directory service. I first tried to set up Jenkins authentication via the AWS Directory Service, hoping it would be pretty much like talking to an Active Directory server. That proved to be impossible to set up, at least for me. I also tried to have an LDAP proxy server talking to the AWS Directory Service and have Jenkins authenticate against the LDAP proxy. No dice. I ended up setting up a good old-fashioned LDAP server and managed to get Jenkins working with it. Here are some of my notes.

OpenLDAP server setup


I followed this excellent guide from Digital Ocean. The server was an Ubuntu 14.04 EC2 instance in my case. What follows in terms of the server setup is taken almost verbatim from the DO guide.

Set the hostname



# hostnamectl set-hostname my-ldap-server


Edit /etc/hosts and make sure this entry exists:

LOCAL_IP_ADDRESS my-ldap-server.mycompany.com my-ldap-server

(it makes a difference that the FQDN is the first entry in the line above!)

Make sure the following types of names are returned when you run hostname with different options:


# hostname
my-ldap-server

# hostname -f
my-ldap-server.mycompany.com

# hostname -d
mycompany.com

Install slapd



# apt-get install slapd ldap-utils
# dpkg-reconfigure slapd

(here you specify the LDAP admin password)

Install the SSL Components



# apt-get install gnutls-bin ssl-cert

Create the CA Template


# mkdir /etc/ssl/templates
# vi /etc/ssl/templates/ca_server.conf
# cat /etc/ssl/templates/ca_server.conf
cn = LDAP Server CA
ca
cert_signing_key


Create the LDAP Service Template



# vi /etc/ssl/templates/ldap_server.conf
# cat /etc/ssl/templates/ldap_server.conf
organization = "My Company"
cn = my-ldap-server.mycompany.com
tls_www_server
encryption_key
signing_key
expiration_days = 3650


Create the CA Key and Certificate



# certtool -p --outfile /etc/ssl/private/ca_server.key
# certtool -s --load-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ca_server.conf --outfile /etc/ssl/certs/ca_server.pem

Create the LDAP Service Key and Certificate



# certtool -p --sec-param high --outfile /etc/ssl/private/ldap_server.key
# certtool -c --load-privkey /etc/ssl/private/ldap_server.key --load-ca-certificate /etc/ssl/certs/ca_server.pem --load-ca-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ldap_server.conf --outfile /etc/ssl/certs/ldap_server.pem


Give OpenLDAP Access to the LDAP Server Key



# usermod -aG ssl-cert openldap
# chown :ssl-cert /etc/ssl/private/ldap_server.key
# chmod 640 /etc/ssl/private/ldap_server.key


Configure OpenLDAP to Use the Certificate and Keys


IMPORTANT NOTE: in modern versions of slapd, configuring the server is not done via slapd.conf anymore. Instead, you put together ldif files and run LDAP client utilities such as ldapmodify against the local server. The Distinguished Name of the entity you want to modify in terms of configuration is generally dn: cn=config but it can also be the LDAP database dn: olcDatabase={1}hdb,cn=config.

# vi addcerts.ldif
# cat addcerts.ldif
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/ca_server.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap_server.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap_server.key


# ldapmodify -H ldapi:// -Y EXTERNAL -f addcerts.ldif
# service slapd force-reload
# cp /etc/ssl/certs/ca_server.pem /etc/ldap/ca_certs.pem
# vi /etc/ldap/ldap.conf

Set TLS_CACERT to the following:
TLS_CACERT /etc/ldap/ca_certs.pem

# ldapwhoami -H ldap:// -x -ZZ
Anonymous


Force Connections to Use TLS


Change olcSecurity attribute to include 'tls=1':

# vi forcetls.ldif
# cat forcetls.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcSecurity
olcSecurity: tls=1


# ldapmodify -H ldapi:// -Y EXTERNAL -f forcetls.ldif
# service slapd force-reload
# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL dn
(shouldn’t work)

# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL -Z dn
(should work)


Disallow anonymous bind


Create user binduser to be used for LDAP searches:


# vi binduser.ldif
# cat binduser.ldif
dn: cn=binduser,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: binduser
uid: binduser
uidNumber: 2000
gidNumber: 200
homeDirectory: /home/binduser
loginShell: /bin/bash
gecos: suser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1


# ldapadd -x -W -D "cn=admin,dc=mycompany,dc=com" -Z -f binduser.ldif
Enter LDAP Password:
adding new entry "cn=binduser,dc=mycompany,dc=com"

Change the olcDisallows attribute to include bind_anon:


# vi disallow_anon_bind.ldif
# cat disallow_anon_bind.ldif
dn: cn=config
changetype: modify
add: olcDisallows
olcDisallows: bind_anon


# ldapmodify -H ldapi:// -Y EXTERNAL -f disallow_anon_bind.ldif
# service slapd force-reload

Also disable anonymous access to frontend:

# vi disable_anon_frontend.ldif
# cat disable_anon_frontend.ldif
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
add: olcRequires
olcRequires: authc


# ldapmodify -H ldapi:// -Y EXTERNAL -f disable_anon_frontend.ldif
# service slapd force-reload


Create organizational units and users


Create helper scripts:

# cat add_ldap_ldif.sh
#!/bin/bash


LDIF=$1


ldapadd -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF


# cat modify_ldap_ldif.sh
#!/bin/bash


LDIF=$1


ldapmodify -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF


# cat set_ldap_pass.sh
#!/bin/bash


USER=$1
PASS=$2


ldappasswd -s $PASS -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -x "uid=$USER,ou=users,dc=mycompany,dc=com" -Z

Create ‘mypeople’ organizational unit:


# cat add_ou_mypeople.ldif
dn: ou=mypeople,dc=mycompany,dc=com
objectclass: organizationalunit
ou: mypeople
description: all users

# ./add_ldap_ldif.sh add_ou_mypeople.ldif

Create 'groups' organizational unit:


# cat add_ou_groups.ldif
dn: ou=groups,dc=mycompany,dc=com
objectclass: organizationalunit
ou: groups
description: all groups


# ./add_ldap_ldif.sh add_ou_groups.ldif

Create users (note the shadow attributes set to -1, which means they will be ignored):


# cat add_user_myuser.ldif
dn: uid=myuser,ou=mypeople,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: myuser
uid: myuser
uidNumber: 2001
gidNumber: 201
homeDirectory: /home/myuser
loginShell: /bin/bash
gecos: myuser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1

# ./add_ldap_ldif.sh add_user_myuser.ldif
# ./set_ldap_pass.sh myuser MYPASS


Enable LDAPS


In /etc/default/slapd set:

SLAPD_SERVICES="ldap:/// ldaps:/// ldapi:///"
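
Restart slapd after this change, then verify that the ldaps:// listener on port 636 actually works, for example by binding as the admin user over LDAPS (this relies on TLS_CACERT being set correctly in /etc/ldap/ldap.conf as shown earlier):

# service slapd restart
# ldapwhoami -H ldaps://my-ldap-server.mycompany.com -x -D "cn=admin,dc=mycompany,dc=com" -W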


Enable debugging


This was a life saver when it came to troubleshooting connection issues from clients such as Jenkins or other Linux boxes. To enable full debug output, set olcLogLevel to -1:

# cat enable_debugging.ldif
dn: cn=config
changetype: modify
add: olcLogLevel
olcLogLevel: -1

# ldapadd -H ldapi:// -Y EXTERNAL -f enable_debugging.ldif
# service slapd force-reload


Configuring Jenkins LDAP authentication


Verify LDAPS connectivity from Jenkins to LDAP server


In my case, the Jenkins server is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the Jenkins box pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com

I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the Jenkins server:

# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.

Set up LDAPS client on Jenkins server (StartTLS does not work with the Jenkins LDAP plugin!)


# apt-get install ldap-utils

IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from LDAP server as /etc/ldap/ca_certs.pem on Jenkins server and then:

# vi /etc/ldap/ldap.conf
set:
TLS_CACERT /etc/ldap/ca_certs.pem
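
Before touching Jenkins itself, it's worth confirming from the Jenkins box that an LDAPS bind and search work with the binduser credentials that the Jenkins LDAP plugin will use (assuming you have set a password for binduser; myuser is just an example uid to look up):

# ldapsearch -H ldaps://my-ldap-server.mycompany.com:636 -x -D "cn=binduser,dc=mycompany,dc=com" -W -b "dc=mycompany,dc=com" "uid=myuser" dn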

Add LDAP certificates to Java keystore used by Jenkins


As user jenkins:
$ mkdir .keystore
$ cp /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/security/cacerts .keystore/
(you may need to customize the above line in terms of the path to the cacerts file -- it is the one under your JAVA_HOME)

$ keytool -keystore /var/lib/jenkins/.keystore/cacerts -import -alias my-ldap-server.mycompany.com:636 -file /etc/ldap/ca_certs.pem
Enter keystore password: changeit
Owner: CN=LDAP Server CA
Issuer: CN=LDAP Server CA
Serial number: 570bddb0
Valid from: Mon Apr 11 17:24:00 UTC 2016 until: Tue Apr 11 17:24:00 UTC 2017
Certificate fingerprints:
....
Extensions:
....

Trust this certificate? [no]:  yes
Certificate was added to keystore

In /etc/default/jenkins, set JAVA_ARGS to:
JAVA_ARGS="-Djava.awt.headless=true -Djavax.net.ssl.trustStore=/var/lib/jenkins/.keystore/cacerts -Djavax.net.ssl.trustStorePassword=changeit"  

As root, restart jenkins:

# service jenkins restart

Jenkins settings for LDAP plugin


This took me a while to get right. The trick was to set the rootDN to dc=mycompany,dc=com and the userSearchBase to ou=mypeople (or to whatever name you gave your users' organizational unit). I also tried to get LDAP groups to work, but wasn't very successful.

Here is the LDAP section in /var/lib/jenkins/config.xml:
 <securityRealm class="hudson.security.LDAPSecurityRealm" plugin="ldap@1.11">
   <server>ldaps://my-ldap-server.mycompany.com:636</server>
   <rootDN>dc=mycompany,dc=com</rootDN>
   <inhibitInferRootDN>true</inhibitInferRootDN>
   <userSearchBase>ou=mypeople</userSearchBase>
   <userSearch>uid={0}</userSearch>
   <groupSearchBase>ou=groups</groupSearchBase>
   <groupMembershipStrategy class="jenkins.security.plugins.ldap.FromGroupSearchLDAPGroupMembershipStrategy">
     <filter>member={0}</filter>
   </groupMembershipStrategy>
   <managerDN>cn=binduser,dc=mycompany,dc=com</managerDN>
   <managerPasswordSecret>JGeIGFZwjipl6hJNefTzCwClRcLqYWEUNmnXlC3AOXI=</managerPasswordSecret>
   <disableMailAddressResolver>false</disableMailAddressResolver>
   <displayNameAttributeName>displayname</displayNameAttributeName>
   <mailAddressAttributeName>mail</mailAddressAttributeName>
   <userIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>
   <groupIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>

 </securityRealm>


At this point, I was able to create users on the LDAP server and have them log in to Jenkins. With CloudBees Jenkins Enterprise, I was also able to use the Role-Based Access Control and Folder plugins in order to create project-specific folders and folder-specific groups specifying various roles. For example, a folder MyProjectNumber1 would have a Developers group defined inside it, as well as an Administrators group and a Readers group. These groups would be associated with fine-grained roles that only allow certain Jenkins operations for each group.

I tried to have these groups read by Jenkins from the LDAP server, but was unsuccessful. Instead, I had to populate the folder-specific groups in Jenkins with individual user names, which at least are still defined in LDAP. So that was half a win. I am still waiting to see if I can define the groups in LDAP, but for now this is a workaround that works for me.
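
For reference, the kind of entry that the member={0} filter in the group membership strategy above expects is a plain groupOfNames object, with one member attribute per user DN. A hypothetical group (the name and membership below are made up) would look like this, added with the helper script from earlier:

# cat add_group_developers.ldif
dn: cn=developers,ou=groups,dc=mycompany,dc=com
objectClass: top
objectClass: groupOfNames
cn: developers
member: uid=myuser,ou=mypeople,dc=mycompany,dc=com

# ./add_ldap_ldif.sh add_group_developers.ldif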

Allowing users to change their LDAP password


This was again a seemingly easy task but turned out to be pretty complicated. I set up another small EC2 instance to act as a jumpbox for users who want to change their LDAP password.

The jumpbox is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the jumpbox pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com

I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the jumpbox:

# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.

# apt-get install ldap-utils

IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from LDAP server as /etc/ldap/ca_certs.pem on the jumpbox and then:

# vi /etc/ldap/ldap.conf
set:
TLS_CACERT /etc/ldap/ca_certs.pem

Next, I followed this LDAP Client Authentication guide from the Ubuntu documentation.

# apt-get install ldap-auth-client nscd

Here I had to answer the setup questions on LDAP server FQDN, admin DN and password, and bind user DN and password. 

# auth-client-config -t nss -p lac_ldap

I edited /etc/auth-client-config/profile.d/ldap-auth-config and set:

[lac_ldap]
nss_passwd=passwd: ldap files
nss_group=group: ldap files
nss_shadow=shadow: ldap files
nss_netgroup=netgroup: nis

I edited /etc/ldap.conf and made sure the following entries were there:

base dc=mycompany,dc=com
uri ldaps://my-ldap-server.mycompany.com
binddn cn=binduser,dc=mycompany,dc=com
bindpw BINDUSERPASS
rootbinddn cn=admin,dc=mycompany,dc=com
port 636
ssl on
tls_cacertfile /etc/ldap/ca_certs.pem
tls_cacertdir /etc/ssl/certs

I allowed password-based ssh logins to the jumpbox by editing /etc/ssh/sshd_config and setting:

PasswordAuthentication yes

# service ssh restart


IMPORTANT: On the LDAP server, I had to allow users to change their own password by adding this ACL:

# cat set_userpassword_acl.ldif

dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userpassword by dn="cn=admin,dc=mycompany,dc=com" write by self write by anonymous auth by users none

Then:

# ldapmodify -H ldapi:// -Y EXTERNAL -f set_userpassword_acl.ldif


At this point, users were able to log in via ssh to the jumpbox using a pre-set LDAP password, and change their LDAP password by using the regular Unix 'passwd' command.
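The password change session on the jumpbox looks roughly like this (the exact prompts depend on the PAM configuration):

$ passwd
Enter login(LDAP) password:
New password:
Re-enter new password:
LDAP password information changed for myuser
passwd: password updated successfully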

I am still fine-tuning the LDAP setup on all fronts: LDAP server, LDAP client jumpbox and Jenkins server. The setup I have so far allows me to have a single sign-on account for users to log in to Jenkins. One of my next steps is to use the same LDAP user accounts for authentication and access control in MySQL and other services.