
OpenShift – Day 3 – Scaling an application

[student@workstation ~]$ oc get dc
NAME         REVISION   DESIRED   CURRENT   TRIGGERED BY
instructor   0          1         0         config,image(instructor:latest)
[student@workstation ~]$ oc edit dc instructor
Edit cancelled, no changes made.
[student@workstation ~]$

 

[student@workstation ~]$ oc get pod
NAME      READY     STATUS    RESTARTS   AGE
mysqldb   1/1       Running   1          16h
[student@workstation ~]$

[student@workstation ~]$ oc scale dc instructor --replicas=2
deploymentconfig "instructor" scaled

[student@workstation ~]$ oc get dc
NAME         REVISION   DESIRED   CURRENT   TRIGGERED BY
instructor   0          2         0         config,image(instructor:latest)
[student@workstation ~]$

[student@workstation ~]$ oc edit dc instructor
Edit cancelled, no changes made.


The autoscaler is available starting with OpenShift 3.4.
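As of 3.4 an autoscaler can be created directly from the CLI. A minimal sketch (the dc name `scaling` and the thresholds are illustrative, and cluster metrics must already be deployed for CPU-based scaling to work):

```shell
# Create a HorizontalPodAutoscaler for the deployment config:
# keep between 1 and 5 replicas, targeting 80% CPU utilization
oc autoscale dc scaling --min 1 --max 5 --cpu-percent=80

# Check the autoscaler status
oc get hpa scaling
```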


[student@workstation ~]$ oc new-project scaling
Now using project "scaling" on server "https://master.lab.example.com:8443".

You can add applications to this project with the 'new-app' command. For example, try:

oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
[student@workstation ~]$ oc project instructor
Now using project "instructor" on server "https://master.lab.example.com:8443".
[student@workstation ~]$ oc get pod
NAME      READY     STATUS    RESTARTS   AGE
mysqldb   1/1       Running   1          16h
[student@workstation ~]$ oc project scaling
Now using project "scaling" on server "https://master.lab.example.com:8443".
[student@workstation ~]$ oc -o json new-app openshift/php:5.5~http://workstation.lab.example.com/scaling > scaling.json
[student@workstation ~]$

[student@workstation ~]$ vi scaling.json
[student@workstation ~]$ oc create -f scaling.json
imagestream "scaling" created
buildconfig "scaling" created
deploymentconfig "scaling" created
service "scaling" created
[student@workstation ~]$

[student@workstation ~]$ watch oc get builds
[student@workstation ~]$ oc get pods
NAME              READY     STATUS    RESTARTS   AGE
scaling-1-build   1/1       Running   0          46s
[student@workstation ~]$

[student@workstation ~]$ oc get services
NAME      CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
scaling   172.30.217.167   <none>        8080/TCP   1m
[student@workstation ~]$

[student@workstation ~]$ oc expose service scaling --hostname=scaling.cloudapps.lab.example.com
route "scaling" exposed
[student@workstation ~]$

[student@workstation ~]$ oc get pod
NAME              READY     STATUS      RESTARTS   AGE
scaling-1-build   0/1       Completed   0          2m
scaling-1-epgjq   1/1       Running     0          1m
scaling-1-lcudo   1/1       Running     0          12s
scaling-1-r5yu1   1/1       Running     0          17s
[student@workstation ~]$

 

https://master.lab.example.com:8443/console/project/scaling/overview

[student@workstation ~]$ for POD in $(oc get pods |grep Running |cut -f1 -d" ")
> do
> oc describe pod $POD |grep IP
> done
IP:            10.129.0.55
IP:            10.129.0.57
IP:            10.129.0.56
[student@workstation ~]$

[student@workstation ~]$ for i in {1..5}
> do
> curl -s http://scaling.cloudapps.lab.example.com |grep IP
> done
<br/> Server IP: 10.129.0.55
<br/> Server IP: 10.129.0.56
<br/> Server IP: 10.129.0.57
<br/> Server IP: 10.129.0.55
<br/> Server IP: 10.129.0.56
[student@workstation ~]$

[student@workstation ~]$ oc describe dc scaling |grep Replicas
Replicas:    3
Replicas:    3 current / 3 desired
[student@workstation ~]$

[student@workstation ~]$ oc scale --replicas=5 dc scaling
deploymentconfig "scaling" scaled
[student@workstation ~]$ oc get pods
NAME              READY     STATUS              RESTARTS   AGE
scaling-1-build   0/1       Completed           0          8m
scaling-1-epgjq   1/1       Running             0          7m
scaling-1-lcudo   1/1       Running             0          5m
scaling-1-r5yu1   1/1       Running             0          5m
scaling-1-swihf   0/1       ContainerCreating   0          3s
scaling-1-vibir   0/1       ContainerCreating   0          3s
[student@workstation ~]$

[student@workstation ~]$ for i in {1..5}
> do
> curl -s http://scaling.cloudapps.lab.example.com |grep IP
> done
<br/> Server IP: 10.129.0.55
<br/> Server IP: 10.129.0.56
<br/> Server IP: 10.129.0.57
<br/> Server IP: 10.129.0.59
<br/> Server IP: 10.129.0.60
[student@workstation ~]$

[root@master ~]# oc get pods -n scaling
NAME              READY     STATUS      RESTARTS   AGE
scaling-1-build   0/1       Completed   0          19m
scaling-1-epgjq   1/1       Running     0          18m
scaling-1-lcudo   1/1       Running     0          17m
scaling-1-r5yu1   1/1       Running     0          17m
scaling-1-swihf   1/1       Running     0          11m
scaling-1-vibir   1/1       Running     0          11m
[root@master ~]#

[root@master ~]# oc get pods -n scaling
NAME              READY     STATUS      RESTARTS   AGE
scaling-1-build   0/1       Completed   0          23m
scaling-1-epgjq   1/1       Running     0          21m
scaling-1-r5yu1   1/1       Running     0          20m
[root@master ~]# oc describe pod scaling -n scaling
Name:            scaling-1-build
Namespace:        scaling
Security Policy:    privileged
Node:            node.lab.example.com/172.25.250.11
Start Time:        Wed, 26 Apr 2017 09:20:58 +0200
Labels:            openshift.io/build.name=scaling-1
Status:            Succeeded


[root@master master]# oc whoami
system:admin
[root@master master]# oc get nodes
NAME                     STATUS                     AGE
master.lab.example.com   Ready,SchedulingDisabled   1d
node.lab.example.com     Ready                      1d
[root@master master]#

SchedulingDisabled = no pods are scheduled on this node
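Scheduling can be toggled per node with the cluster administration commands; a hedged example (in OCP 3.x the verb is `oc adm manage-node`):

```shell
# Mark the node unschedulable: pods already running stay in place,
# but no new pods are scheduled there
oc adm manage-node node.lab.example.com --schedulable=false

# Make it schedulable again
oc adm manage-node node.lab.example.com --schedulable=true
```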

[root@master master]# oc get nodes --show-labels
NAME                     STATUS                     AGE       LABELS
master.lab.example.com   Ready,SchedulingDisabled   1d        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master.lab.example.com
node.lab.example.com     Ready                      1d        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node.lab.example.com,region=infra
[root@master master]#

[root@master master]# oc label node node.lab.example.com region=EUROPE
error: 'region' already has a value (infra), and --overwrite is false
[root@master master]# oc label node node.lab.example.com region=EUROPE --overwrite
node "node.lab.example.com" labeled
[root@master master]#


[student@workstation ~]$ oc login
Authentication required for https://master.lab.example.com:8443 (openshift)
Username: student
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

instructor
* scaling

Using project "scaling".
[student@workstation ~]$

[root@master master]# oc get nodes --show-labels
NAME                     STATUS                     AGE       LABELS
master.lab.example.com   Ready,SchedulingDisabled   1d        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master.lab.example.com
node.lab.example.com     Ready                      1d        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node.lab.example.com,region=EUROPE
[root@master master]#
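The `region=EUROPE` label only takes effect once a pod template selects it. A sketch of pinning the `scaling` dc to the labeled nodes with a nodeSelector (the patch below is illustrative):

```shell
# Add a nodeSelector to the dc pod template; new pods will only be
# scheduled on nodes carrying region=EUROPE (this triggers a redeploy)
oc patch dc scaling \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"region":"EUROPE"}}}}}'
```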


DeploymentConfig (dc)

[student@workstation ~]$ oc get dc
NAME      REVISION   DESIRED   CURRENT   TRIGGERED BY
scaling   2          3         3         config,image(scaling:latest)
[student@workstation ~]$ oc describe dc scaling
Name:        scaling
Namespace:    scaling
Created:    About an hour ago
Labels:        app=scaling
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version:    2
Selector:    app=scaling,deploymentconfig=scaling
Replicas:    3
Triggers:    Config, Image(scaling@latest, auto=true)
Strategy:    Rolling
Template:
Labels:    app=scaling
deploymentconfig=scaling
Annotations:    openshift.io/container.scaling.image.entrypoint=["container-entrypoint","/bin/sh","-c","$STI_SCRIPTS_PATH/usage"]


Rollout -> deploying the application

[student@workstation ~]$ oc get pods
NAME              READY     STATUS    RESTARTS   AGE
scaling-2-4md0c   1/1       Running   0          42m
scaling-2-5q9uc   1/1       Running   0          1h
scaling-2-lb35t   1/1       Running   0          12m

 

[student@workstation ~]$ oc rollout latest scaling
deploymentconfig "scaling" rolled out
[student@workstation ~]$ oc deploy scaling
scaling deployment #4 pending 2 seconds ago
scaling deployment #3 deployed 43 seconds ago - 2 pods
[student@workstation ~]$


[student@workstation ~]$ oc get pods
NAME              READY     STATUS    RESTARTS   AGE
scaling-4-a8stq   1/1       Running   0          1m
scaling-4-zko9i   1/1       Running   0          1m
[student@workstation ~]$

View all deployments:

[student@workstation ~]$ oc describe dc scaling
Name:        scaling
Namespace:    scaling
Created:    About an hour ago
Labels:        app=scaling
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version:    4
Selector:    app=scaling,deploymentconfig=scaling
Replicas:    2
Triggers:    Config, Image(scaling@latest, auto=true)
Strategy:    Rolling
Template:
Labels:    app=scaling
deploymentconfig=scaling
Annotations:    openshift.io/container.scaling.image.entrypoint=["container-entrypoint","/bin/sh","-c","$STI_SCRIPTS_PATH/usage"]
openshift.io/generated-by=OpenShiftNewApp
Containers:
scaling:
Image:            172.30.243.29:5000/scaling/scaling@sha256:e8aa721195ca4653b289c28973b6e1b3222fd48b42655e4417da2d483daf7908
Port:            8080/TCP
Volume Mounts:        <none>
Environment Variables:    <none>
No volumes.

Deployment #4 (latest):
Name:        scaling-4
Created:    about a minute ago
Status:        Complete
Replicas:    2 current / 2 desired
Selector:    app=scaling,deployment=scaling-4,deploymentconfig=scaling
Labels:        app=scaling,openshift.io/deployment-config.name=scaling
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Deployment #3:
Created:    2 minutes ago
Status:        Complete
Replicas:    0 current / 0 desired
Deployment #2:
Created:    about an hour ago
Status:        Complete
Replicas:    0 current / 0 desired

Events:
FirstSeen    LastSeen    Count    From                SubobjectPath    Type        Reason                Message
---------    --------    -----    ----                -------------    --------    ------                -------
1h        1h        1    {deploymentconfig-controller }            Normal        ReplicationControllerScaled    Scaled replication controller "scaling-1" from 8 to 3
1h        1h        1    {deploymentconfig-controller }            Normal        ReplicationControllerScaled    Scaled replication controller "scaling-1" from 3 to 2
1h        1h        1    {deploymentconfig-controller }            Normal        DeploymentCreated        Created new replication controller "scaling-2" for version 2
51m        51m        1    {deploymentconfig-controller }            Normal        ReplicationControllerScaled    Scaled replication controller "scaling-2" from 2 to 3
22m        22m        1    {deploymentconfig-controller }            Normal        ReplicationControllerScaled    Scaled replication controller "scaling-2" from 3 to 5
7m        7m        1    {deploymentconfig-controller }            Normal        ReplicationControllerScaled    Scaled replication controller "scaling-2" from 5 to 2
2m        2m        1    {deploymentconfig-controller }            Normal        DeploymentCreated        Created new replication controller "scaling-3" for version 3
1m        1m        1    {deploymentconfig-controller }            Normal        DeploymentCreated        Created new replication controller "scaling-4" for version 4
[student@workstation ~]$


Rollback

oc rollback scaling -> roll back to the previous version

or: oc rollback scaling --version=2 -> roll back to a specific version


[student@workstation ~]$ oc rollback scaling
#5 rolled back to scaling-3
Warning: the following images triggers were disabled: scaling:latest
You can re-enable them with: oc set triggers dc/scaling --auto
[student@workstation ~]$

To restore the link between the dc and the registry, the trigger must be re-enabled: oc set triggers dc/scaling --auto

[student@workstation ~]$ oc set triggers dc/scaling --auto
deploymentconfig "scaling" updated
[student@workstation ~]$
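Run without flags, `oc set triggers` lists the triggers defined on a resource, which is a quick way to confirm that the image-change trigger is back to auto after the rollback:

```shell
# Show the config and image-change triggers defined on the dc
oc set triggers dc/scaling
```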


[student@workstation ~]$ wget http://materials.example.com/do280-ansible.tar.gz
--2017-04-26 11:11:47--  http://materials.example.com/do280-ansible.tar.gz
Resolving materials.example.com (materials.example.com)... 172.25.254.254
Connecting to materials.example.com (materials.example.com)|172.25.254.254|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10922 (11K) [application/x-gzip]
Saving to: 'do280-ansible.tar.gz'

100%[=============================================================================================================================>] 10,922      --.-K/s   in 0s

2017-04-26 11:11:47 (326 MB/s) - 'do280-ansible.tar.gz' saved [10922/10922]

[student@workstation ~]$ tar xzf do280-ansible.tar.gz
[student@workstation ~]$ ansible-playbook playbook.yml

PLAY [Guided Exercise: Preparing for Installation] *****************************

TASK [setup] *******************************************************************
ok: [master.lab.example.com]
ok: [node.lab.example.com]

TASK [Create /root/.ssh (if necessary)] ****************************************
skipping: [node.lab.example.com]
ok: [master.lab.example.com]

TASK [Copy lab_rsa to /root/.ssh/id_rsa] ***************************************
skipping: [node.lab.example.com]
ok: [master.lab.example.com]

TASK [Copy lab_rsa.pub to /root/.ssh/id_rsa.pub] *******************************
skipping: [node.lab.example.com]
ok: [master.lab.example.com]

TASK [Deploy ssh key to root at all nodes] *************************************
changed: [master.lab.example.com]
changed: [node.lab.example.com]

TASK [Stop and disable firewalld] **********************************************
ok: [master.lab.example.com]
ok: [node.lab.example.com]

TASK [Install docker] **********************************************************
ok: [node.lab.example.com]
ok: [master.lab.example.com]

TASK [Customize /etc/sysconfig/docker] *****************************************
changed: [master.lab.example.com]
changed: [node.lab.example.com]

TASK [Customize /etc/sysconfig/docker-storage-setup] ***************************
ok: [master.lab.example.com]
ok: [node.lab.example.com]

TASK [Verify existence of /dev/docker-vg/docker-pool] **************************
changed: [master.lab.example.com]
changed: [node.lab.example.com]

TASK [Copy /etc/pki/tls/certs/example.com.crt to /etc/pki/ca-trust/source/anchors/] ***
ok: [master.lab.example.com]
ok: [node.lab.example.com]

TASK [Start and enable docker] *************************************************
ok: [master.lab.example.com]
ok: [node.lab.example.com]

RUNNING HANDLER [stop-docker] **************************************************
changed: [master.lab.example.com]

TASK [Check for OCP Installation (Post Installation part 1)] *******************
skipping: [node.lab.example.com]

TASK [Push /etc/sysconfig/docker again] ****************************************
ok: [node.lab.example.com]

TASK [Exclude OpenShift packages from updates] *********************************
changed: [node.lab.example.com]

PLAY [Guided Exercise: Completing Postinstallation Tasks (part 2)] *************

PLAY [Guided Exercise: Configuring Authentication] *****************************

PLAY [Changes made in Chapter 3] ***********************************************

TASK [setup] *******************************************************************
ok: [workstation.lab.example.com]

TASK [Install atomic-openshift-clients] ****************************************
ok: [workstation.lab.example.com]

TASK [Create /home/student/.kube] **********************************************
ok: [workstation.lab.example.com]

TASK [Populate student oc infomation] ******************************************
changed: [workstation.lab.example.com]
to retry, use: --limit @/home/student/playbook.retry

PLAY RECAP *********************************************************************
master.lab.example.com     : ok=23   changed=11   unreachable=0    failed=1
node.lab.example.com       : ok=20   changed=10   unreachable=0    failed=0
workstation.lab.example.com : ok=4    changed=1    unreachable=0    failed=0

[student@workstation ~]$


[student@workstation ~]$ oc login -u student -p redhat
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

* instructor
scaling

Using project "instructor".
[student@workstation ~]$ oc new-project version
Now using project "version" on server "https://master.lab.example.com:8443".

You can add applications to this project with the 'new-app' command. For example, try:

oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
[student@workstation ~]$ oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':

instructor
scaling
* version

Using project "version" on server "https://master.lab.example.com:8443".
[student@workstation ~]$ oc new-app openshift/php:5.5~http://workstation.lab.example.com/version
--> Found image 2d6fbdf (3 months old) in image stream "openshift/php" under tag "5.5" for "openshift/php:5.5"

Apache 2.4 with PHP 5.5
-----------------------
Platform for building and running PHP 5.5 applications

Tags: builder, php, php55

* A source build using source code from http://workstation.lab.example.com/version will be created
* The resulting image will be pushed to image stream "version:latest"
* Use 'start-build' to trigger a new build
* This image will be deployed in deployment config "version"
* Port 8080/tcp will be load balanced by service "version"
* Other containers can access this service through the hostname "version"

--> Creating resources ...
imagestream "version" created
buildconfig "version" created
deploymentconfig "version" created
service "version" created
--> Success
Build scheduled, use 'oc logs -f bc/version' to track its progress.
Run 'oc status' to view your app.
[student@workstation ~]$

[student@workstation ~]$ oc get dc
NAME      REVISION   DESIRED   CURRENT   TRIGGERED BY
version   0          1         0         config,image(version:latest)
[student@workstation ~]$ oc edit dc version

[student@workstation ~]$ oc edit -o json dc version

                "openshift.io/generated-by": "OpenShiftNewApp"
            }
        },
        "spec": {
            "strategy": {
                "type": "Rolling",
                "rollingParams": {
                    "updatePeriodSeconds": 1,
                    "intervalSeconds": 1,
                    "timeoutSeconds": 600,

 


[student@workstation ~]$ watch oc get builds
[student@workstation ~]$ oc get services
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
version   172.30.35.127   <none>        8080/TCP   2m
[student@workstation ~]$ oc expose service version --hostname=version.cloudapps.lab.example.com
route "version" exposed
[student@workstation ~]$

[student@workstation ~]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
version-1-build   0/1       Completed   0          3m
version-1-vxs26   1/1       Running     0          2m

[student@workstation ~]$ curl http://version.cloudapps.lab.example.com
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<p>Version v1</p>
</body>
</html>
[student@workstation ~]$


[student@workstation ~]$ git config --global user.name "Student user"
[student@workstation ~]$ git config --global user.email student@workstation.lab.example.com
[student@workstation ~]$ git config --global push.default simple
[student@workstation ~]$ cd ~
[student@workstation ~]$

[student@workstation ~]$ git clone http://workstation.lab.example.com/version
Cloning into 'version'...
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
[student@workstation ~]$

[student@workstation ~]$ cd version
[student@workstation version]$ ls
index.php
[student@workstation version]$ vi index.php
[student@workstation version]$ git add .
[student@workstation version]$ git commit -m "update to v2"
[master f0517b4] update to v2
1 file changed, 1 insertion(+), 1 deletion(-)
[student@workstation version]$

[student@workstation version]$ git push
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 350 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To http://workstation.lab.example.com/version
2ab3a50..f0517b4  master -> master

 

[student@workstation version]$ cd ~
[student@workstation ~]$ oc start-build version
build "version-2" started
[student@workstation ~]$

[student@workstation ~]$ watch oc status
[student@workstation ~]$

[student@workstation ~]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
version-1-build   0/1       Completed   0          10m
version-2-build   0/1       Completed   0          2m
version-2-kjvvl   1/1       Running     0          1m
[student@workstation ~]$

[student@workstation version]$ curl http://version.cloudapps.lab.example.com
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<p>Version v2 ATOS</p>
</body>
</html>
[student@workstation version]$

[student@workstation version]$ oc rollback version
#4 rolled back to version-2
Warning: the following images triggers were disabled: version:latest
You can re-enable them with: oc set triggers dc/version --auto
[student@workstation version]$ oc get pods
NAME               READY     STATUS      RESTARTS   AGE
version-1-build    0/1       Completed   0          13m
version-2-build    0/1       Completed   0          5m
version-3-build    0/1       Completed   0          1m
version-3-w331b    1/1       Running     0          1m
version-4-deploy   1/1       Running     0          6s
[student@workstation version]$

[student@workstation version]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
version-1-build   0/1       Completed   0          13m
version-2-build   0/1       Completed   0          5m
version-3-build   0/1       Completed   0          1m
version-4-irm38   1/1       Running     0          14s
[student@workstation version]$

[student@workstation version]$ curl http://version.cloudapps.lab.example.com
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<p>Version v1</p>
</body>
</html>
[student@workstation version]$

[student@workstation version]$ oc set triggers dc/version --auto
deploymentconfig "version" updated
[student@workstation version]$


[student@workstation version]$ vi index.php
[student@workstation version]$ git add .
[student@workstation version]$ git commit -m "update to v3"
[master 4343620] update to v3
1 file changed, 1 insertion(+), 1 deletion(-)
[student@workstation version]$ git push
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 297 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To http://workstation.lab.example.com/version
f0517b4..4343620  master -> master
[student@workstation version]$ cd ~
[student@workstation ~]$

Manual rollout (needed if the trigger has not been set back to auto):

[student@workstation ~]$ oc rollout latest version
deploymentconfig "version" rolled out
[student@workstation ~]$ curl http://version.cloudapps.lab.example.com
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<p>Version v2 ATOS</p>
</body>
</html>
[student@workstation ~]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
version-1-build   0/1       Completed   0          18m
version-2-build   0/1       Completed   0          10m
version-3-build   0/1       Completed   0          6m
version-4-build   0/1       Completed   0          34s
version-6-tee4y   1/1       Running     0          9s
[student@workstation ~]$ curl http://version.cloudapps.lab.example.com
<html>
<head>
<title>PHP Test</title>
</head>
<body>
<p>Version v3 ATOS NEW</p>
</body>
</html>
[student@workstation ~]$


[student@workstation version]$ cd ..
[student@workstation ~]$ tar xvzf do280-ansible.tar.gz
README-ansible.txt
files/jboss-image-streams.json
files/student-oc-config
files/image-streams-rhel7.json
playbook.yml
.vimrc
files/installer.cfg.yml
files/master-config.yaml
ansible.cfg
files/
files/correct-registry.sh
files/docker
inventory
[student@workstation ~]$ more playbook.yml

- name: "Guided Exercise: Preparing for Installation"
  hosts: nodes
  remote_user: root
  vars:
    local_docker: "workstation.lab.example.com:5000"
    docker_cert: /etc/pki/tls/certs/example.com.crt
  tasks:
    - block:
        - name: Create /root/.ssh (if necessary)
          file:
            path: /root/.ssh
            mode: 0700
            owner: root
            group: root
            state: directory
          when: inventory_hostname in groups['masters']

        - name: Copy lab_rsa to /root/.ssh/id_rsa
          copy:
            src: /home/student/.ssh/lab_rsa
            dest: /root/.ssh/id_rsa
            mode: 0600
            owner: root
            group: root
            force: no
          when: inventory_hostname in groups['masters']

        - name: Copy lab_rsa.pub to /root/.ssh/id_rsa.pub
          copy:
            src: /home/student/.ssh/lab_rsa.pub
            dest: /root/.ssh/id_rsa.pub
            mode: 0644
            owner: root
            group: root
            force: no
          when: inventory_hostname in groups['masters']

        - name: Deploy ssh key to root at all nodes
          authorized_key:
            user: root
            key: "{{ lookup('file', '/home/student/.ssh/lab_rsa.pub') }}"

      always:
        - name: Stop and disable firewalld
          service:
            name: firewalld
            state: stopped
            enabled: false

      tags:
        - install_prep
        - install_and_fetch
        - install_ocp

    # The docker_cert variable is set at the beginning of this play and
    # it should point to the same file specified in the "certificate"
    # line of /etc/docker-distribution/registry/config.yml.
    - block:
        - name: Install docker
          yum:
            name: docker
            state: latest

        - name: Customize /etc/sysconfig/docker
          copy:
            src: files/docker
            dest: /etc/sysconfig/docker
          notify:
            - stop-docker
            - remove-docker-files
            - start-docker

        - name: Customize /etc/sysconfig/docker-storage-setup
          copy:
            src: /home/student/DO280/labs/install-preparing/docker-storage-setup
            dest: /etc/sysconfig/docker-storage-setup

        - name: Verify existence of /dev/docker-vg/docker-pool
          command: "/usr/bin/ls /dev/docker-vg/docker-pool"

      rescue:
        - name: Run docker-storage-setup
          command: /usr/bin/docker-storage-setup

      always:
        - name: Copy {{ docker_cert }} to /etc/pki/ca-trust/source/anchors/
          copy:
            src: "{{ docker_cert }}"
            dest: /etc/pki/ca-trust/source/anchors/
          notify:
            - update-ca-trust extract

        - name: Start and enable docker
          service:
            name: docker
            state: started
            enabled: true

      tags:
        - install_prep
        - install_and_fetch
        - install_ocp

  handlers:
    - name: stop-docker
      service:
        name: docker
        state: stopped

    - name: remove-docker-files
      # These files cannot be removed if docker.service is running.
      command: "/usr/bin/rm -rf /var/lib/docker/*"

    - name: start-docker
      service:
        name: docker
        state: started

    - name: update-ca-trust extract
      command: "/usr/bin/update-ca-trust extract"

#################################################################

- name: "Guided Exercise: Installing Packages and Fetching Images"
  hosts: nodes
  remote_user: root
  tasks:
    # The docker-python package isn't required for class, but it is necessary
    # in order to use the docker_image module.
    - name: Install required packages
      yum:
        name: "{{ item }}"
        state: latest
      with_items:
        - atomic-openshift-docker-excluder
        - atomic-openshift-excluder
        - atomic-openshift-utils
        - bind-utils
        - bridge-utils
        - docker-python
        - git
        - iptables-services
        - net-tools
        - wget
      tags:
        - install_and_fetch
        - install_ocp

    - name: "Pull docker images from workstation.lab.example.com:5000"
      docker_image:
        name: "{{ item }}"
        pull: true
      with_items:
        - "workstation.lab.example.com:5000/openshift3/ose-haproxy-router:v3.4.0.39"
        - "workstation.lab.example.com:5000/openshift3/ose-deployer:v3.4.0.39"
        - "workstation.lab.example.com:5000/openshift3/ose-sti-builder:v3.4.0.39"
        - "workstation.lab.example.com:5000/openshift3/ose-pod:v3.4.0.39"
        - "workstation.lab.example.com:5000/openshift3/ose-docker-registry:v3.4.0.39"
        - "workstation.lab.example.com:5000/openshift3/ose-docker-builder:v3.4.0.39"
        - "workstation.lab.example.com:5000/openshift3/php-55-rhel7:latest"
        #- "workstation.lab.example.com:5000/openshift3/php-55-rhel7:5.5-47"
        - "workstation.lab.example.com:5000/openshift3/nodejs-010-rhel7:latest"
        #- "workstation.lab.example.com:5000/openshift3/nodejs-010-rhel7:0.10-47"
        - "workstation.lab.example.com:5000/openshift3/mysql-55-rhel7:latest"
        #- "workstation.lab.example.com:5000/openshift3/mysql-55-rhel7:5.5-35"
        - "workstation.lab.example.com:5000/jboss-eap-6/eap64-openshift:1.4"
        - "workstation.lab.example.com:5000/jboss-eap-7/eap70-openshift:1.4"
        - "workstation.lab.example.com:5000/openshift/hello-openshift:latest"
        - "workstation.lab.example.com:5000/openshift3/registry-console:3.3"
      tags:
        - install_and_fetch
        - install_ocp

#################################################################

- name: "Guided Exercise: Running the Installer"
  hosts: nodes
  remote_user: root
  tasks:
    - name: Create /root/.config/openshift/
      file:
        path: /root/.config/openshift
        owner: root
        group: root
        mode: 0755
        state: directory
        recurse: yes
      when: inventory_hostname in groups['masters']
      tags:
        - install_ocp

    - name: Populate /root/.config/openshift/installer.cfg.yml
      copy:
        src: files/installer.cfg.yml
        dest: /root/.config/openshift/installer.cfg.yml
      when: inventory_hostname in groups['masters']
      tags:
        - install_ocp

    - name: Modify the template for registry-console.yaml
      lineinfile:
        dest: /usr/share/ansible/openshift-ansible/roles/openshift_hosted_templates/files/v1.4/enterprise/registry-console.yaml
        regexp: '    value: "registry.access.redhat.com/openshift3/"'
        line: '    value: "workstation.lab.example.com:5000/openshift3/"'
        backup: yes
      when: inventory_hostname in groups['masters']
      tags:
        - install_ocp

    # When running atomic-openshift-installer, if this step
    # hasn't been completed, the installer will indicate that it
    # is unable to install some required packages. The packages do
    # exist, but they have been excluded.
    - name: Remove OpenShift package exclusions
      command: "/usr/sbin/atomic-openshift-excluder unexclude"
      tags:
        - install_ocp

    - name: Run atomic-openshift-installer
      command: "/usr/bin/atomic-openshift-installer -u -c /root/.config/openshift/installer.cfg.yml install"
      when: inventory_hostname in groups['masters']
      tags:
        - install_ocp

#################################################################

- name: "Guided Exercise: Completing Postinstallation Tasks (part 1)"
  hosts: nodes
  remote_user: root
  tasks:
    - block:
        - name: Check for OCP Installation (Post Installation part 1)
          service:
            name: atomic-openshift-master
            state: started
            enabled: true
          when: inventory_hostname in groups['masters']

        - name: Push /etc/sysconfig/docker again
          copy:
            src: files/docker
            dest: /etc/sysconfig/docker
          notify: restart-docker

        - name: Exclude OpenShift packages from updates
          command: "/usr/sbin/atomic-openshift-excluder exclude"

      rescue:
        - name: Failed OCP Install Check (Post Installation)
          debug:
            msg: "OpenShift Container Platform needs to be installed first. You may need to reset your master and node hosts and then run: ansible-playbook playbook --tags 'install_ocp'"

      tags:
        - post_install
        - config_auth

  handlers:
    - name: restart-docker
      service:
        name: docker
        state: restarted

#################################################################

- name: "Guided Exercise: Completing Postinstallation Tasks (part 2)"
  hosts: masters
  remote_user: root
  tasks:
    - block:
        - name: Check for OCP Installation (Post Installation part 2)
          service:
            name: atomic-openshift-master
            state: started
            enabled: true

        # Something happens during this automated process where the pod
        # for the registry-console doesn't come up. To deal with this
        # problem I created correct-registry.sh to see if the
        # registry-console pod is stuck. If it is stuck, the script
        # will delete the existing registry-console pods and recreate
        # them by modifying the registry-console deployment config.
        - name: Create /root/bin (if necessary)
          file:
            path: /root/bin
            mode: 0755
            owner: root
            group: root
            state: directory

        - name: Copy correct-registry.sh script
          copy:
            src: files/correct-registry.sh
            dest: /root/bin/
            mode: 0755
            owner: root
            group: root

        - name: Run /root/bin/correct-registry.sh
          shell: /root/bin/correct-registry.sh

        - name: Edit RHEL7 Image Streams
          copy:
            src: files/image-streams-rhel7.json
            dest: /usr/share/openshift/examples/image-streams/image-streams-rhel7.json
          notify:
            - delete_openshift_is
            - create_rhel7_is
            - create_jboss_is

        - name: Edit JBoss Image Streams
          copy:
            src: files/jboss-image-streams.json
            dest: /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json
          notify:
            - delete_openshift_is
            - create_rhel7_is
            - create_jboss_is

      rescue:
        - name: Failed OCP Install Check (Post Installation)
          debug:
            msg: "OpenShift Container Platform needs to be installed first. You may need to reset your master and node hosts and then run: ansible-playbook playbook --tags 'install_ocp'"

      tags:
        - post_install
        - config_auth

  handlers:
    - name: delete_openshift_is
      command: "/usr/bin/oc delete is -n openshift --all"

    - name: create_rhel7_is
      command: "/usr/bin/oc create -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json -n openshift"

    - name: create_jboss_is
      command: "/usr/bin/oc create -f /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json -n openshift"

#################################################################

- name: "Guided Exercise: Configuring Authentication"
  hosts: masters
  remote_user: root
  vars:
    passwd_file: /etc/origin/openshift-passwd
    users:
      - name: developer
        password: openshift
      - name: student
        password: redhat
      - name: testuser
        password: redhat
  tasks:
    - block:
        - name: Check for OCP Installation (Configure Authentication)
          service:
            name: atomic-openshift-master
            state: started
            enabled: true

        - name: Install httpd-tools
          yum:
            name: httpd-tools
            state: latest

        - name: Allow oc and web access for users
          htpasswd:
            path: "{{ passwd_file }}"
            name: "{{ item.name }}"
            password: "{{ item.password }}"
            state: present
            create: yes
          with_items: "{{ users }}"

        - name: Change master-config.yaml to use HTPasswdPasswordIdentityProvider
          copy:
            src: files/master-config.yaml
            dest: /etc/origin/master/master-config.yaml
          notify: restart_openshift_master

      rescue:
        - name: Failed OCP Install Check (Configure Authentication)
          debug:
            msg: "OpenShift Container Platform needs to be installed first. You may need to reset your master and node hosts and then run: ansible-playbook playbook --tags 'install_ocp'"

      tags:
        - config_auth

  handlers:
    - name: restart_openshift_master
      service:
        name: atomic-openshift-master
        state: restarted
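The htpasswd task above appends `user:hash` records to /etc/origin/openshift-passwd. As a quick, illustrative sketch of what those records look like (this is not part of the playbook; `openssl passwd -apr1` produces the same Apache MD5 scheme that htpasswd uses by default, and the salt shown is an arbitrary example value):

```shell
#!/bin/sh
# Illustrative only: build one htpasswd-style line for user "developer"
# with password "openshift". The salt "saltsalt" is an arbitrary example.
HASH=$(openssl passwd -apr1 -salt saltsalt openshift)
echo "developer:${HASH}"
```

The HTPasswdPasswordIdentityProvider configured in master-config.yaml then validates logins against exactly this file format.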

#################################################################

- name: Changes made in Chapter 3
  hosts: workstation.lab.example.com
  remote_user: root
  tasks:
    - name: Install atomic-openshift-clients
      yum:
        name: atomic-openshift-clients
        state: latest
      tags:
        - ch03

    - name: Create /home/student/.kube
      file:
        path: /home/student/.kube
        state: directory
        owner: student
        group: student
        mode: 0755
      tags:
        - ch03

    - name: Populate student oc information
      copy:
        src: files/student-oc-config
        dest: /home/student/.kube/config
        owner: student
        group: student
        mode: 0600
      tags:
        - ch03

#################################################################
[student@workstation ~]$

[student@workstation ~]$ cat ansible.cfg
[defaults]
inventory = inventory
remote_user = root

[privilege_escalation]
become=True
become_method=su
become_user=root
become_ask_pass=False
[student@workstation ~]$

[student@workstation ~]$ cd files
[student@workstation files]$ ls
correct-registry.sh  docker  image-streams-rhel7.json  installer.cfg.yml  jboss-image-streams.json  master-config.yaml  student-oc-config
[student@workstation files]$ more correct-registry.sh
#!/bin/bash
TEMP_DC=/tmp/dc-registry-console.json

# Check to see if the docker-registry is running.
# If it hasn't moved to a status of Running, give
# the pods 60 seconds to settle.
oc get pods | grep docker-registry | grep -q Running
if [ $? -ne 0 ]; then
    sleep 60
fi

# Check to see if there was an error pulling the registry-console image.
oc get pods | grep ^registry-console | grep -q -E "ErrImagePull|ImagePullBackOff|Error"
if [ $? -eq 0 ]; then
    # Delete the existing registry-console pods
    for POD in $(oc get pods | grep ^registry-console | cut -f1 -d" "); do
        #echo "Running: oc delete pod $POD"
        oc delete pod $POD
    done

    # Export the deployment config for the registry-console
    oc export dc registry-console -o json > ${TEMP_DC}

    # Provided the export worked successfully...
    if [ -f "${TEMP_DC}" ]; then
        # Replace registry.access.redhat.com with workstation.lab.example.com:5000
        sed -i 's/registry.access.redhat.com/workstation.lab.example.com:5000/' ${TEMP_DC}

        # This could be used if something besides registry.access.redhat.com was used.
        #sed -i 's#^\([[:space:]]\+"image": "\).*\(/openshift3/registry-console:3.3",\)$#\1workstation.lab.example.com:5000\2#' ${TEMP_DC}

        RECREATE="false"
        until [ $RECREATE = "true" ]; do
            oc get dc | grep -q ^registry-console
            if [ $? -eq 0 ]; then
                #echo "Running: oc delete dc registry-console"
                oc delete dc registry-console
                sleep 2
            else
                RECREATE="true"
            fi
        done

        # Create a new deployment config for the registry-console.
        # This should trigger a new deployment of the registry-console pods.
        #echo "Running: oc create -f ${TEMP_DC}"
        oc create -f ${TEMP_DC}
    fi
fi
[student@workstation files]$
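The pivotal line in correct-registry.sh is the `sed` substitution that repoints the exported deployment config at the classroom registry. Its effect can be checked in isolation; the JSON fragment below is only an illustrative sample of what the exported file contains:

```shell
#!/bin/sh
# A sample "image" line as it might appear in the exported deployment config.
LINE='"image": "registry.access.redhat.com/openshift3/registry-console:3.3",'

# The same substitution the script applies in place with sed -i.
NEW=$(echo "$LINE" | sed 's/registry.access.redhat.com/workstation.lab.example.com:5000/')
echo "$NEW"
```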

[student@workstation files]$ more installer.cfg.yml
ansible_callback_facts_yaml: /root/.config/openshift/.ansible/callback_facts.yaml
ansible_inventory_path: /root/.config/openshift/hosts
ansible_log_path: /tmp/ansible.log
deployment:
  ansible_ssh_user: root
  hosts:
    - connect_to: master.lab.example.com
      hostname: master.lab.example.com
      ip: 172.25.250.10
      public_hostname: master.lab.example.com
      public_ip: 172.25.250.10
      roles:
        - master
        - etcd
        - node
        - storage
    - connect_to: node.lab.example.com
      hostname: node.lab.example.com
      ip: 172.25.250.11
      node_labels: '{"region": "infra"}'
      public_hostname: node.lab.example.com
      public_ip: 172.25.250.11
      roles:
        - node
  master_routingconfig_subdomain: cloudapps.lab.example.com
  openshift_master_cluster_hostname: None
  openshift_master_cluster_public_hostname: None
  proxy_exclude_hosts: ''
  proxy_http: ''
  proxy_https: ''
  roles:
    etcd: {}
    master: {}
    node: {}
    storage: {}
variant: openshift-enterprise
variant_version: '3.4'
version: v2
[student@workstation files]$


[student@workstation files]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
version-1-build   0/1       Completed   0          1h
version-2-build   0/1       Completed   0          1h
version-3-build   0/1       Completed   0          1h
version-4-build   0/1       Completed   0          1h
version-6-tee4y   1/1       Running     0          1h
[student@workstation files]$ oc exec -ti version-6-tee4y bash
bash-4.2$ ls /usr/libexec/s2i
assemble  run  usage
bash-4.2$ more /usr/libexec/s2i/assemble
#!/bin/bash

set -e

shopt -s dotglob
echo "---> Installing application source..."
mv /tmp/src/* ./

if [ -f composer.json ]; then
  echo "Found 'composer.json', installing dependencies using composer.phar... "

  # Install Composer
  curl https://getcomposer.org/installer | php

  # Change the repo mirror if provided
  if [ -n "$COMPOSER_MIRROR" ]; then
    ./composer.phar config -g repositories.packagist composer $COMPOSER_MIRROR
  fi

  # Install App dependencies using Composer
  ./composer.phar install --no-interaction --no-ansi --optimize-autoloader

  if [ ! -f composer.lock ]; then
    echo -e "\nConsider adding a 'composer.lock' file into your source repository.\n"
  fi
fi

# Fix source directory permissions
fix-permissions ./
bash-4.2$
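A detail worth noting in assemble is `shopt -s dotglob`: without it, `mv /tmp/src/*` would silently skip hidden files such as `.htaccess`, which PHP applications often rely on. A throwaway demonstration of the difference (run in a scratch directory, not inside the image):

```shell
#!/bin/bash
# Scratch demo: a hidden file is skipped by * until dotglob is enabled.
SRC=$(mktemp -d)
touch "$SRC/.htaccess" "$SRC/index.php"

before=$(cd "$SRC" && echo *)        # only index.php matched
shopt -s dotglob
after=$(cd "$SRC" && echo *)         # .htaccess and index.php matched

echo "before: $before"
echo "after:  $after"
rm -rf "$SRC"
```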

bash-4.2$ more usage
#!/bin/sh

DISTRO=`cat /etc/*-release | grep ^ID= | grep -Po '".*?"' | tr -d '"'`

cat <<EOF
This is a S2I PHP-5.5 ${DISTRO} base image:
To use it, install S2I: https://github.com/openshift/source-to-image

Sample invocation:

s2i build https://github.com/sclorg/s2i-php-container.git --context-dir=/5.5/test/test-app/ openshift/php-55-${DISTRO}7 php-test-app

You can then run the resulting image via:
docker run -p 8080:8080 php-test-app
EOF
bash-4.2$ more run    # the run script is executed when the pod starts
#!/bin/bash

export_vars=$(cgroup-limits); export $export_vars
export DOCUMENTROOT=${DOCUMENTROOT:-/}

# Default php.ini configuration values, all taken
# from php defaults.
export ERROR_REPORTING=${ERROR_REPORTING:-E_ALL & ~E_NOTICE}
export DISPLAY_ERRORS=${DISPLAY_ERRORS:-ON}
export DISPLAY_STARTUP_ERRORS=${DISPLAY_STARTUP_ERRORS:-OFF}
export TRACK_ERRORS=${TRACK_ERRORS:-OFF}
export HTML_ERRORS=${HTML_ERRORS:-ON}
export INCLUDE_PATH=${INCLUDE_PATH:-.:/opt/app-root/src:/opt/rh/php55/root/usr/share/pear}
export SESSION_PATH=${SESSION_PATH:-/tmp/sessions}
export SHORT_OPEN_TAG=${SHORT_OPEN_TAG:-ON}
# TODO should be dynamically calculated based on container memory limit/16
export OPCACHE_MEMORY_CONSUMPTION=${OPCACHE_MEMORY_CONSUMPTION:-16M}

export OPCACHE_REVALIDATE_FREQ=${OPCACHE_REVALIDATE_FREQ:-2}

export PHPRC=${PHPRC:-/opt/rh/php55/root/etc/php.ini}
export PHP_INI_SCAN_DIR=${PHP_INI_SCAN_DIR:-/opt/rh/php55/root/etc/php.d}

envsubst < /opt/app-root/etc/php.ini.template > /opt/rh/php55/root/etc/php.ini
envsubst < /opt/app-root/etc/php.d/opcache.ini.template > /opt/rh/php55/root/etc/php.d/opcache.ini

export HTTPD_START_SERVERS=${HTTPD_START_SERVERS:-8}
export HTTPD_MAX_SPARE_SERVERS=$((HTTPD_START_SERVERS+10))

if [ -n "${NO_MEMORY_LIMIT:-}" -o -z "${MEMORY_LIMIT_IN_BYTES:-}" ]; then
  #
  export HTTPD_MAX_REQUEST_WORKERS=${HTTPD_MAX_REQUEST_WORKERS:-256}
else
  # A simple calculation for MaxRequestWorkers would be: Total Memory / Size Per Apache process.
  # The total memory is determined from the Cgroups and the average size for the
  # Apache process is estimated to 15MB.
  max_clients_computed=$((MEMORY_LIMIT_IN_BYTES/1024/1024/15))
  # The MaxClients should never be lower than StartServers, which is set to 5.
  # In case the container has memory limit set to <64M we pin the MaxClients to 4.
  [[ $max_clients_computed -le 4 ]] && max_clients_computed=4
  export HTTPD_MAX_REQUEST_WORKERS=${HTTPD_MAX_REQUEST_WORKERS:-$max_clients_computed}
  echo "-> Cgroups memory limit is set, using HTTPD_MAX_REQUEST_WORKERS=${HTTPD_MAX_REQUEST_WORKERS}"
fi

envsubst < /opt/app-root/etc/conf.d/50-mpm-tuning.conf.template > /opt/app-root/etc/conf.d/50-mpm-tuning.conf
envsubst < /opt/app-root/etc/conf.d/00-documentroot.conf.template > /opt/app-root/etc/conf.d/00-documentroot.conf

exec httpd -D FOREGROUND
bash-4.2$
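The MaxRequestWorkers arithmetic in the run script is easy to sanity-check outside the container: the cgroup memory limit is divided by an assumed 15 MB per Apache process, with a floor of 4 workers for very small limits. A minimal restatement of that formula:

```shell
#!/bin/sh
# Reproduce the worker calculation from the S2I run script.
compute_workers() {
  MEMORY_LIMIT_IN_BYTES=$1
  max_clients_computed=$((MEMORY_LIMIT_IN_BYTES/1024/1024/15))
  # Pin the value to 4 when the computed count is too small.
  [ "$max_clients_computed" -le 4 ] && max_clients_computed=4
  echo "$max_clients_computed"
}

compute_workers $((512*1024*1024))   # 512 MiB limit -> 34 workers
compute_workers $((64*1024*1024))    # 64 MiB limit  -> pinned to 4
```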

[student@workstation files]$ oc logs bc/version
Cloning "http://workstation.lab.example.com/version" ...
Commit:    43436207bff89a91f104e55e238c87fccbdad964 (update to v3)
Author:    Student user <student@workstation.lab.example.com>
Date:    Wed Apr 26 11:32:10 2017 +0200
---> Installing application source...
Pushing image 172.30.243.29:5000/version/version:latest ...
Pushed 4/5 layers, 84% complete
Pushed 5/5 layers, 100% complete
Push successful
[student@workstation files]$


TEMPLATE

[student@workstation files]$ oc get templates -n openshift |grep mysql
cakephp-mysql-example                           An example CakePHP application with a MySQL database. For more information ab…   19 (4 blank)      7
dancer-mysql-example                            An example Dancer application with a MySQL database. For more information abo…   18 (5 blank)      7
datagrid65-mysql                                Application template for JDG 6.5 and MySQL applications.                           32 (19 blank)     9
datagrid65-mysql-persistent                     Application template for JDG 6.5 and MySQL applications with persistent storage.   33 (19 blank)     10
eap64-mysql-persistent-s2i                      Application template for EAP 6 MySQL applications with persistent storage bui…   37 (17 blank)     10
eap64-mysql-s2i                                 Application template for EAP 6 MySQL applications built using S2I.                 36 (17 blank)     9
eap70-mysql-persistent-s2i                      Application template for EAP 7 MySQL applications with persistent storage bui…   37 (17 blank)     10
eap70-mysql-s2i                                 Application template for EAP 7 MySQL applications built using S2I.                 36 (17 blank)     9
jws30-tomcat7-mysql-persistent-s2i              Application template for JWS MySQL applications with persistent storage built…   28 (11 blank)     10
jws30-tomcat7-mysql-s2i                         Application template for JWS MySQL applications built using S2I.                   27 (11 blank)     9
jws30-tomcat8-mysql-persistent-s2i              Application template for JWS MySQL applications with persistent storage built…   28 (11 blank)     10
jws30-tomcat8-mysql-s2i                         Application template for JWS MySQL applications built using S2I.                   27 (11 blank)     9
mysql-ephemeral                                 MySQL database service, without persistent storage. For more information abou…   7 (2 generated)   2
mysql-persistent                                MySQL database service, with persistent storage. For more information about u…   8 (2 generated)   3
processserver63-amq-mysql-persistent-s2i        Application template for Red Hat JBoss BPM Suite 6.3 intelligent process serv…   45 (11 blank)     13
processserver63-amq-mysql-s2i                   Application template for Red Hat JBoss BPM Suite 6.3 intelligent process serv…   43 (11 blank)     11
processserver63-mysql-persistent-s2i            Application template for Red Hat JBoss BPM Suite 6.3 intelligent process serv…   36 (12 blank)     10
processserver63-mysql-s2i                       Application template for Red Hat JBoss BPM Suite 6.3 intelligent process serv…   35 (12 blank)     9
sso70-mysql                                     Application template for SSO 7.0 MySQL applications                                35 (20 blank)     7
sso70-mysql-persistent                          Application template for SSO 7.0 MySQL applications with persistent storage        36 (20 blank)     8
[student@workstation files]$ oc export templates mysql-persistent -n openshift -o json > temp.json
[student@workstation files]$

[student@workstation files]$ cat temp.json
{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "name": "mysql-persistent",
    "creationTimestamp": null,
    "annotations": {
      "description": "MySQL database service, with persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/5.6/README.md.\n\nNOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.",
      "iconClass": "icon-mysql-database",
      "openshift.io/display-name": "MySQL (Persistent)",
      "tags": "database,mysql"
    }
  },
  "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n       Username: ${MYSQL_USER}\n       Password: ${MYSQL_PASSWORD}\n  Database Name: ${MYSQL_DATABASE}\n Connection URL: mysql://${DATABASE_SERVICE_NAME}:3306/\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/5.6/README.md.",
  "objects": [
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "name": "${DATABASE_SERVICE_NAME}"
      },
      "spec": {
        "ports": [
          {
            "name": "mysql",
            "port": 3306
          }
        ],
        "selector": {
          "name": "${DATABASE_SERVICE_NAME}"
        }
      }
    },
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "name": "${DATABASE_SERVICE_NAME}"
      },
      "spec": {
        "accessModes": [
          "ReadWriteOnce"
        ],
        "resources": {
          "requests": {
            "storage": "${VOLUME_CAPACITY}"
          }
        }
      }
    },
    {
      "apiVersion": "v1",
      "kind": "DeploymentConfig",
      "metadata": {
        "name": "${DATABASE_SERVICE_NAME}"
      },
      "spec": {
        "replicas": 1,
        "selector": {
          "name": "${DATABASE_SERVICE_NAME}"
        },
        "strategy": {
          "type": "Recreate"
        },
        "template": {
          "metadata": {
            "labels": {
              "name": "${DATABASE_SERVICE_NAME}"
            }
          },
          "spec": {
            "containers": [
              {
                "env": [
                  {
                    "name": "MYSQL_USER",
                    "value": "${MYSQL_USER}"
                  },
                  {
                    "name": "MYSQL_PASSWORD",
                    "value": "${MYSQL_PASSWORD}"
                  },
                  {
                    "name": "MYSQL_DATABASE",
                    "value": "${MYSQL_DATABASE}"
                  }
                ],
                "image": " ",
                "imagePullPolicy": "IfNotPresent",
                "livenessProbe": {
                  "initialDelaySeconds": 30,
                  "tcpSocket": {
                    "port": 3306
                  },
                  "timeoutSeconds": 1
                },
                "name": "mysql",
                "ports": [
                  {
                    "containerPort": 3306
                  }
                ],
                "readinessProbe": {
                  "exec": {
                    "command": [
                      "/bin/sh",
                      "-i",
                      "-c",
                      "MYSQL_PWD=\"$MYSQL_PASSWORD\" mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE -e 'SELECT 1'"
                    ]
                  },
                  "initialDelaySeconds": 5,
                  "timeoutSeconds": 1
                },
                "resources": {
                  "limits": {
                    "memory": "${MEMORY_LIMIT}"
                  }
                },
                "volumeMounts": [
                  {
                    "mountPath": "/var/lib/mysql/data",
                    "name": "${DATABASE_SERVICE_NAME}-data"
                  }
                ]
              }
            ],
            "volumes": [
              {
                "name": "${DATABASE_SERVICE_NAME}-data",
                "persistentVolumeClaim": {
                  "claimName": "${DATABASE_SERVICE_NAME}"
                }
              }
            ]
          }
        },
        "triggers": [
          {
            "imageChangeParams": {
              "automatic": true,
              "containerNames": [
                "mysql"
              ],
              "from": {
                "kind": "ImageStreamTag",
                "name": "mysql:${MYSQL_VERSION}",
                "namespace": "${NAMESPACE}"
              }
            },
            "type": "ImageChange"
          },
          {
            "type": "ConfigChange"
          }
        ]
      }
    }
  ],
  "parameters": [
    {
      "name": "MEMORY_LIMIT",
      "displayName": "Memory Limit",
      "description": "Maximum amount of memory the container can use.",
      "value": "512Mi",
      "required": true
    },
    {
      "name": "NAMESPACE",
      "displayName": "Namespace",
      "description": "The OpenShift Namespace where the ImageStream resides.",
      "value": "openshift"
    },
    {
      "name": "DATABASE_SERVICE_NAME",
      "displayName": "Database Service Name",
      "description": "The name of the OpenShift Service exposed for the database.",
      "value": "mysql",
      "required": true
    },
    {
      "name": "MYSQL_USER",
      "displayName": "MySQL Connection Username",
      "description": "Username for MySQL user that will be used for accessing the database.",
      "generate": "expression",
      "from": "user[A-Z0-9]{3}",
      "required": true
    },
    {
      "name": "MYSQL_PASSWORD",
      "displayName": "MySQL Connection Password",
      "description": "Password for the MySQL connection user.",
      "generate": "expression",
      "from": "[a-zA-Z0-9]{16}",
      "required": true
    },
    {
      "name": "MYSQL_DATABASE",
      "displayName": "MySQL Database Name",
      "description": "Name of the MySQL database accessed.",
      "value": "sampledb",
      "required": true
    },
    {
      "name": "VOLUME_CAPACITY",
      "displayName": "Volume Capacity",
      "description": "Volume space available for data, e.g. 512Mi, 2Gi.",
      "value": "1Gi",
      "required": true
    },
    {
      "name": "MYSQL_VERSION",
      "displayName": "Version of MySQL Image",
      "description": "Version of MySQL image to be used (5.5, 5.6 or latest).",
      "value": "5.6",
      "required": true
    }
  ],
  "labels": {
    "template": "mysql-persistent-template"
  }
}
[student@workstation files]$
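Every `${NAME}` reference in the template body is replaced by `oc process` with the corresponding parameter value (or a generated value for the `generate: expression` parameters). The sketch below mimics that substitution with `sed` purely to illustrate the mechanism; it is not how `oc` implements it internally:

```shell
#!/bin/sh
# Illustrative stand-in for template parameter substitution:
# replace ${DATABASE_SERVICE_NAME} in a template fragment with a chosen value.
FRAGMENT='"claimName": "${DATABASE_SERVICE_NAME}"'
VALUE=mysql

RESULT=$(echo "$FRAGMENT" | sed "s/\${DATABASE_SERVICE_NAME}/$VALUE/")
echo "$RESULT"
```

On a real cluster the same end result comes from processing the template and feeding it back to the API, for example `oc process mysql-persistent -n openshift -p MYSQL_DATABASE=testdb | oc create -f -`.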


[root@master DO280]# cd labs
[root@master labs]# ls
build-ug        deploy-registry  deploy-service   fetch         install-preparing       managing-limiting  test-install-demo
customizing-go  deploy-route     deploy-template  install       install-preparing-demo  php-custom
deploy-pod      deploy-s2i       deploy-volume    install-demo  lab-review              post-install
[root@master labs]# cd customizing-go/
[root@master customizing-go]# ls
go-is.json  go-rhel7.tar.gz  go-template.json
[root@master customizing-go]# ls -ltr
total 507604
-rw-rw-r--. 1 root root      7057 Mar  4 12:45 go-template.json
-rw-rw-r--. 1 root root       306 Mar  4 12:45 go-is.json
-rw-r--r--. 1 root root 519770624 Mar 21 07:34 go-rhel7.tar.gz
[root@master customizing-go]# docker load -i go-rhel7.tar.gz
275be1d3d070: Loading layer [==================================================>] 166.1 MB/166.1 MB

 

[root@master customizing-go]# docker images |grep go-rhe
openshift3/go-rhel7                                               v1.0                922d51f71b5c        19 months ago       502.9 MB

[root@master customizing-go]# docker tag openshift3/go-rhel7:v1.0 workstation.lab.example.com:5000/openshift3/go-rhel7:v1.0
[root@master customizing-go]#

[root@master customizing-go]# docker push workstation.lab.example.com:5000/openshift3/go-rhel7:v1.0
The push refers to a repository [workstation.lab.example.com:5000/openshift3/go-rhel7]
5f70bf18a086: Pushed
a2cab8c9c446: Pushed
2991d89b3fef: Pushed
044672c27ac7: Pushed
485d804acd98: Pushed
44674b4c9fa0: Pushed
8d21fb116790: Pushed
abab321ee7e0: Pushed
v1.0: digest: sha256:5f786b3d3bb79d99beef780131eee3f33cb56ae16b83168689e815a13fdb933a size: 5283
[root@master customizing-go]#

[root@master customizing-go]# ls
go-is.json  go-rhel7.tar.gz  go-template.json
[root@master customizing-go]# cat go-is.json
{
  "kind": "ImageStream",
  "apiVersion": "v1",
  "metadata": {
    "name": "go"
  },
  "spec": {
    "tags": [
      {
        "name": "v1.0",
        "from": {
          "kind": "DockerImage",
          "name": "workstation.lab.example.com:5000/openshift3/go-rhel7:v1.0"
        }
      }
    ]
  }
}
[root@master customizing-go]#

[root@master customizing-go]# oc create -f go-is.json -n openshift
imagestream "go" created
[root@master customizing-go]# oc get is -n openshift |grep ^go
go           172.30.243.29:5000/openshift/go           v1.0                      21 seconds ago
[root@master customizing-go]#

[root@master customizing-go]# oc create -f go-template.json -n openshift
template "go-s2i" created
[root@master customizing-go]#

[root@master customizing-go]# cat go-template.json
{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "name": "go-s2i",
    "creationTimestamp": null
  },
  "objects": [
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "annotations": {
          "description": "The web server's http port."
        },
        "labels": {
          "application": "${APPLICATION_NAME}"
        },
        "name": "${APPLICATION_NAME}"
      },
      "spec": {
        "ports": [
          {
            "port": 8080,
            "targetPort": 8080
          }
        ],
        "selector": {
          "deploymentConfig": "${APPLICATION_NAME}"
        }
      }
    },
    {
      "apiVersion": "v1",
      "id": "${APPLICATION_NAME}-http-route",
      "kind": "Route",
      "metadata": {
        "annotations": {
          "description": "Route for application's http service."
        },
        "labels": {
          "application": "${APPLICATION_NAME}"
        },
        "name": "${APPLICATION_NAME}-http-route"
      },
      "spec": {
        "host": "${APPLICATION_HOSTNAME}",
        "to": {
          "name": "${APPLICATION_NAME}"
        }
      }
    },
    {
      "apiVersion": "v1",
      "kind": "ImageStream",
      "metadata": {
        "labels": {
          "application": "${APPLICATION_NAME}"
        },
        "name": "${APPLICATION_NAME}"
      }
    },
    {
      "apiVersion": "v1",
      "kind": "BuildConfig",
      "metadata": {
        "labels": {
          "application": "${APPLICATION_NAME}"
        },
        "name": "${APPLICATION_NAME}"
      },
      "spec": {
        "env": [
          {
            "name": "GO_MAIN",
            "value": "${GO_MAIN}"
          },
          {
            "name": "DOUGLAS",
            "value": "${DOUGLAS}"
          }
        ],
        "output": {
          "to": {
            "name": "${APPLICATION_NAME}"
          }
        },
        "source": {
          "contextDir": "${GIT_CONTEXT_DIR}",
          "git": {
            "ref": "${GIT_REF}",
            "uri": "${GIT_URI}"
          },
          "type": "Git"
        },
        "strategy": {
          "sourceStrategy": {
            "from": {
              "kind": "ImageStreamTag",
              "name": "go:v1.0",
              "namespace": "openshift"
            }
          },
          "type": "Source"
        },
        "triggers": [
          {
            "github": {
              "secret": "${GITHUB_TRIGGER_SECRET}"
            },
            "type": "github"
          },
          {
            "generic": {
              "secret": "${GENERIC_TRIGGER_SECRET}"
            },
            "type": "generic"
          },
          {
            "imageChange": {},
            "type": "imageChange"
          }
        ]
      }
    },
    {
      "apiVersion": "v1",
      "kind": "DeploymentConfig",
      "metadata": {
        "labels": {
          "application": "${APPLICATION_NAME}"
        },
        "name": "${APPLICATION_NAME}"
      },
      "spec": {
        "replicas": 1,
        "selector": {
          "deploymentConfig": "${APPLICATION_NAME}"
        },
        "strategy": {
          "type": "Recreate"
        },
        "template": {
          "metadata": {
            "labels": {
              "application": "${APPLICATION_NAME}",
              "deploymentConfig": "${APPLICATION_NAME}"
            },
            "name": "${APPLICATION_NAME}"
          },
          "spec": {
            "containers": [
              {
                "env": [],
                "image": "${APPLICATION_NAME}",
                "imagePullPolicy": "Always",
                "name": "${APPLICATION_NAME}",
                "ports": [
                  {
                    "containerPort": 8080
                  }
                ]
              }
            ]
          }
        },
        "triggers": [
          {
            "imageChangeParams": {
              "automatic": true,
              "containerNames": [
                "${APPLICATION_NAME}"
              ],
              "from": {
                "kind": "ImageStream",
                "name": "${APPLICATION_NAME}"
              }
            },
            "type": "ImageChange"
          }
        ]
      }
    }
  ],
  "parameters": [
    {
      "name": "APPLICATION_NAME",
      "description": "Application NAME"
    },
    {
      "name": "GIT_URI",
      "description": "GIT URI"
    },
    {
      "name": "GIT_REF",
      "description": "Git branch"
    },
    {
      "name": "GO_MAIN",
      "description": "Main GO package"
    },
    {
      "name": "GIT_CONTEXT_DIR",
      "description": "Path within Git project to build; empty for root project directory."
    },
    {
      "name": "GITHUB_TRIGGER_SECRET",
      "description": "Github trigger secret",
      "generate": "expression",
      "from": "[a-zA-Z0-9]{8}"
    },
    {
      "name": "GENERIC_TRIGGER_SECRET",
      "description": "Generic build trigger secret",
      "generate": "expression",
      "from": "[a-zA-Z0-9]{8}"
    },
    {
      "name": "APPLICATION_HOSTNAME",
      "description": "Application hostname"
    }
  ],
  "labels": {
    "template": "go-s2i"
  }
}
[root@master customizing-go]#


[student@workstation files]$ oc new-project go
Now using project "go" on server "https://master.lab.example.com:8443".

You can add applications to this project with the 'new-app' command. For example, try:

oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

[student@workstation files]$ oc get templates -n openshift |grep ^go
go-s2i                                                                                                                             8 (6 blank)       5
[student@workstation files]$

[student@workstation files]$ oc process --parameters go-s2i -n openshift
NAME                     DESCRIPTION                                                           GENERATOR           VALUE
APPLICATION_NAME         Application NAME
GIT_URI                  GIT URI
GIT_REF                  Git branch
GO_MAIN                  Main GO package
GIT_CONTEXT_DIR          Path within Git project to build; empty for root project directory.
GITHUB_TRIGGER_SECRET    Github trigger secret                                                 expression          [a-zA-Z0-9]{8}
GENERIC_TRIGGER_SECRET   Generic build trigger secret                                          expression          [a-zA-Z0-9]{8}
APPLICATION_HOSTNAME     Application hostname
[student@workstation files]$

[student@workstation files]$ oc new-app --template=go-s2i -p APPLICATION_NAME=hello -p GIT_URI=http://workstation.lab.example.com/go-hello-openshift -p APPLICATION_HOSTNAME=hellogo.cloudapps.lab.example.com
--> Deploying template "openshift/go-s2i" to project go

     * With parameters:
        * APPLICATION_NAME=hello
        * GIT_URI=http://workstation.lab.example.com/go-hello-openshift
        * GIT_REF=
        * GO_MAIN=
        * GIT_CONTEXT_DIR=
        * GITHUB_TRIGGER_SECRET=fQGlEwxf # generated
        * GENERIC_TRIGGER_SECRET=Gxws4Ygk # generated
        * APPLICATION_HOSTNAME=hellogo.cloudapps.lab.example.com

--> Creating resources ...
service "hello" created
route "hello-http-route" created
imagestream "hello" created
buildconfig "hello" created
deploymentconfig "hello" created
--> Success
    Build scheduled, use 'oc logs -f bc/hello' to track its progress.
    Run 'oc status' to view your app.
[student@workstation files]$

[student@workstation files]$ oc logs -f bc/hello
error: no builds found for "hello"
[student@workstation files]$ wtach oc get builds
bash: wtach: command not found...
Similar command is: 'watch'
[student@workstation files]$ watch oc get builds
[student@workstation files]$ oc get pods
NAME            READY     STATUS    RESTARTS   AGE
hello-1-build   1/1       Running   0          34s
[student@workstation files]$
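`oc logs -f bc/hello` fails with "no builds found" until the first build object exists, which is why `watch oc get builds` is used above. The same wait can be scripted as a retry loop; the sketch below is generic, with `true` standing in for the real `oc get builds` check on a cluster:

```shell
#!/bin/sh
# Retry a command until it succeeds, up to a maximum number of attempts.
retry() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i+1))
    sleep 1
  done
  return 1
}

# Placeholder for something like: retry 30 oc get builds
retry 3 true && echo "command eventually succeeded"
```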


WEBHOOKS

[student@workstation ~]$ oc get builds
NAME           TYPE      FROM          STATUS     STARTED          DURATION
php-custom-1   Source    Git@7d8d481   Complete   13 minutes ago   1m6s
[student@workstation ~]$ oc get bc
NAME         TYPE      FROM      LATEST
php-custom   Source    Git       1
[student@workstation ~]$ oc describe bc
Name:        php-custom
Namespace:    php-custom
Created:    13 minutes ago
Labels:        app=php-custom
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version:    1

Strategy:    Source
URL:        http://workstation.lab.example.com/php-custom
From Image:    ImageStreamTag openshift/php:5.5
Output to:    ImageStreamTag php-custom:latest

Build Run Policy:    Serial
Triggered by:        Config, ImageChange
Webhook GitHub:
    URL:    https://master.lab.example.com:8443/oapi/v1/namespaces/php-custom/buildconfigs/php-custom/webhooks/qYAVlzvUWo3Mrqzlaz5D/github
Webhook Generic:
URL:        https://master.lab.example.com:8443/oapi/v1/namespaces/php-custom/buildconfigs/php-custom/webhooks/v4AY51RNL7ozKjmgoqeE/generic
AllowEnv:    false

Build        Status        Duration    Creation Time
php-custom-1     complete     1m6s         2017-04-26 13:57:12 +0200 CEST

No events.
[student@workstation ~]$

 

To trigger the build when the Git repository is updated:

[student@workstation ~]$ curl -X POST -k https://master.lab.example.com:8443/oapi/v1/namespaces/php-custom/buildconfigs/php-custom/webhooks/qYAVlzvUWo3Mrqzlaz5D/github
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "non-parseable Content-Type  (mime: no media type)",
  "reason": "BadRequest",
  "code": 400
}[student@workstation ~]$
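The 400 response is expected here: the bare `curl -X POST` sends no Content-Type header, and the webhook endpoint rejects the request before reading any payload ("mime: no media type"). Supplying `-H 'Content-Type: application/json'` (plus a JSON payload for the GitHub-style hook) should satisfy the parser; the generic hook is the simpler one to trigger by hand. The sketch below only assembles the URL from its parts and prints the command to run, using the generic-hook secret from the `oc describe bc` output above:

```shell
#!/bin/sh
# Build the generic webhook URL from its components (values from the transcript).
MASTER=https://master.lab.example.com:8443
NAMESPACE=php-custom
BC=php-custom
SECRET=v4AY51RNL7ozKjmgoqeE

URL=$MASTER/oapi/v1/namespaces/$NAMESPACE/buildconfigs/$BC/webhooks/$SECRET/generic
# Print (rather than execute) the curl invocation with an explicit media type.
echo "curl -X POST -k -H 'Content-Type: application/json' $URL"
```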