Wednesday, January 31, 2018

Kubernetes interactive tutorial online

module 1

$ minikube version
minikube version: v0.17.1-katacoda
$ minikube start
Starting local Kubernetes cluster...
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl cluster-info
Kubernetes master is running at http://host01:8080
heapster is running at http://host01:8080/api/v1/namespaces/kube-system/services/heapster/proxy
kubernetes-dashboard is running at http://host01:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
monitoring-grafana is running at http://host01:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at http://host01:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes
host01 Ready 22m v1.5.2

module 2

kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

kubectl get deployments

kubectl proxy

curl http://localhost:8001/version

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME

curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/

kubectl get pods

kubectl describe pods

kubectl logs $POD_NAME

kubectl exec $POD_NAME env

kubectl exec -ti $POD_NAME bash
curl localhost:8080

kubectl get services

kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080

kubectl get services

kubectl describe services/kubernetes-bootcamp

export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
curl host01:$NODE_PORT

kubectl describe deployment

kubectl get pods -l run=kubernetes-bootcamp

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME

kubectl label pod $POD_NAME app=v1

kubectl describe pods $POD_NAME

kubectl get pods -l app=v1

kubectl delete service -l run=kubernetes-bootcamp
kubectl get services
curl host01:$NODE_PORT
kubectl exec -ti $POD_NAME curl localhost:8080

module 4

kubectl get deployments
kubectl scale deployments/kubernetes-bootcamp --replicas=4
kubectl get deployments

kubectl get pods -o wide

kubectl describe deployments/kubernetes-bootcamp

export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')

curl host01:$NODE_PORT

kubectl scale deployments/kubernetes-bootcamp --replicas=2
kubectl get deployments
kubectl get pods -o wide
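Scaling can also be done declaratively instead of with "kubectl scale". This is only a sketch of what the equivalent Deployment manifest would look like: the apps/v1beta1 apiVersion matches this cluster's era, but the jocatalin/kubernetes-bootcamp:v1 image tag and the run=kubernetes-bootcamp label are assumptions inferred from the commands above, not taken from a live cluster.

```shell
# Sketch only: write a Deployment manifest that pins replicas to 2,
# then (on a real cluster) apply it with kubectl.
# Image tag and label below are assumptions, not read from a live cluster.
cat > bootcamp-deployment.yaml <<'EOF'
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: jocatalin/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080
EOF
# kubectl apply -f bootcamp-deployment.yaml   # would set the replica count to 2
grep 'replicas:' bootcamp-deployment.yaml
```

With "kubectl apply" the manifest stays the source of truth, so replica changes can live in version control instead of in one-off scale commands.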

module 6

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

kubectl get pods
kubectl describe services/kubernetes-bootcamp
export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
curl host01:$NODE_PORT

kubectl rollout status deployments/kubernetes-bootcamp

kubectl describe pods

kubectl rollout status deployments/kubernetes-bootcamp

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v10
kubectl rollout undo deployments/kubernetes-bootcamp

android tablet as extra monitor

I am using SPACEDESK as an extra monitor for my Fire Tablet. It's free and it works seamlessly, as if it were a real extra monitor. I have configured it as the 3rd monitor, because I already had a second one. You can position it with the Windows 10 "display settings" like any other monitor.

Once in a while it crashes, but not so often....


debug maven settings

mvn archetype:generate -DarchetypeArtifactId=wildfly-javaee7-webapp-archetype -DarchetypeGroupId=org.wildfly.archetype -DarchetypeVersion=8.2.0.Final

(create a nexustests artifactId)

cd nexustests/
mvn -X help:effective-settings

Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T09:58:13+02:00)
Maven home: /home/centos/apache-maven-3.5.2
Java version: 1.8.0_161, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-693.11.6.el7.x86_64", arch: "amd64", family: "unix"

[DEBUG] Reading global settings from /home/centos/apache-maven-3.5.2/conf/settings.xml
[DEBUG] Reading user settings from /home/centos/.m2/settings.xml
[DEBUG] Reading global toolchains from /home/centos/apache-maven-3.5.2/conf/toolchains.xml
[DEBUG] Reading user toolchains from /home/centos/.m2/toolchains.xml
[DEBUG] Using local repository at /home/centos/.m2/repository

[DEBUG] Dependency collection stats: {ConflictMarker.analyzeTime=1779672, ConflictMarker.markTime=809680, ConflictMarker.nodeCount=135, ConflictIdSorter.graphTime=1026621, ConflictIdSorter.topsortTime=610945, ConflictIdSorter.conflictIdCount=39, ConflictIdSorter.conflictIdCycleCount=0, ConflictResolver.totalTime=8120595, ConflictResolver.conflictItemCount=95, DefaultDependencyCollector.collectTime=4585627924, DefaultDependencyCollector.transformTime=15642922}
[DEBUG] org.apache.maven.plugins:maven-help-plugin:jar:2.2:
[DEBUG]    org.apache.maven:maven-artifact:jar:2.2.1:compile
[DEBUG]    org.apache.maven:maven-core:jar:2.2.1:compile
[DEBUG]       org.slf4j:slf4j-jdk14:jar:1.5.6:runtime
[DEBUG]          org.slf4j:slf4j-api:jar:1.5.6:runtime
[DEBUG]       org.slf4j:jcl-over-slf4j:jar:1.5.6:runtime
[DEBUG]       org.apache.maven.reporting:maven-reporting-api:jar:2.2.1:compile
[DEBUG]          org.apache.maven.doxia:doxia-sink-api:jar:1.1:compile
[DEBUG]          org.apache.maven.doxia:doxia-logging-api:jar:1.1:compile
[DEBUG]       org.apache.maven:maven-repository-metadata:jar:2.2.1:compile
[DEBUG]       org.apache.maven:maven-error-diagnostics:jar:2.2.1:compile
[DEBUG]       commons-cli:commons-cli:jar:1.2:compile
[DEBUG]       org.apache.maven:maven-artifact-manager:jar:2.2.1:compile
[DEBUG]          backport-util-concurrent:backport-util-concurrent:jar:3.1:compile
[DEBUG]       classworlds:classworlds:jar:1.1:compile
[DEBUG]       org.sonatype.plexus:plexus-sec-dispatcher:jar:1.3:compile
[DEBUG]          org.sonatype.plexus:plexus-cipher:jar:1.4:compile
[DEBUG]    org.apache.maven:maven-model:jar:2.2.1:compile
[DEBUG]    org.apache.maven:maven-plugin-api:jar:2.2.1:compile
[DEBUG]    org.apache.maven:maven-plugin-descriptor:jar:2.2.1:compile
[DEBUG]    org.apache.maven:maven-project:jar:2.2.1:compile
[DEBUG]       org.apache.maven:maven-plugin-registry:jar:2.2.1:compile
[DEBUG]       org.codehaus.plexus:plexus-interpolation:jar:1.11:compile
[DEBUG]    org.apache.maven:maven-settings:jar:2.2.1:compile
[DEBUG]    org.apache.maven:maven-profile:jar:2.2.1:compile
[DEBUG]    org.apache.maven:maven-monitor:jar:2.2.1:compile
[DEBUG]    org.apache.maven:maven-plugin-parameter-documenter:jar:2.2.1:compile
[DEBUG]    org.apache.maven.plugin-tools:maven-plugin-tools-api:jar:2.4.3:compile
[DEBUG]       jtidy:jtidy:jar:4aug2000r7-dev:compile
[DEBUG]    org.codehaus.plexus:plexus-container-default:jar:1.0-alpha-9:compile
[DEBUG]       junit:junit:jar:3.8.1:compile
[DEBUG]    org.codehaus.plexus:plexus-interactivity-api:jar:1.0-alpha-4:compile
[DEBUG]    org.codehaus.plexus:plexus-utils:jar:1.5.7:compile
[DEBUG]    jdom:jdom:jar:1.0:compile
[DEBUG]    com.thoughtworks.xstream:xstream:jar:1.4.3:compile
[DEBUG]       xmlpull:xmlpull:jar:
[DEBUG]       xpp3:xpp3_min:jar:1.1.4c:compile
[DEBUG]    commons-lang:commons-lang:jar:2.4:compile

and a whole lot of other information that I have removed....

Awesome tool to troubleshoot STINKY Maven

docker monitoring CPU and memory usage

you can run "top" on the host machine, or even better use "docker stats":

docker stats

CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O               BLOCK I/O             PIDS
481ab8c0e8c3        0.00%               1.152 MiB / 15.51 GiB   0.01%               57.44 kB / 34.53 kB   10.06 MB / 4.096 kB   3
335eac0388c6        0.10%               575.2 MiB / 15.51 GiB   3.62%               37.72 kB / 1.296 kB   98.64 MB / 172 kB     45
57adda2fc2ca        0.08%               851.3 MiB / 15.51 GiB   5.36%               6.311 MB / 1.029 MB   152.3 MB / 258 kB     91
58d783dd5b7e        0.09%               849.8 MiB / 15.51 GiB   5.35%               9.961 MB / 2.459 MB   157.1 MB / 266.2 kB   88
f935787e6800        0.10%               2.03 GiB / 15.51 GiB    13.09%              227.9 MB / 36.82 MB   357.7 MB / 311.3 kB   114
a6996fee63aa        184.91%             653.8 MiB / 15.51 GiB   4.12%               876 B / 1.296 kB      100.8 MB / 0 B        92

"docker top" seems to be a misnomer, it's more similar to "ps"

docker top 481ab8c0e8c3
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                21277               21262               0                   Jan30               ?                   00:00:00            /bin/bash /opt/jboss/wildfly/bin/ -b -bmanagement
root                21309               21277               0                   Jan30               ?                   00:00:00            /bin/bash /usr/local/bin/ssh-start
root                21312               21309               0                   Jan30               ?                   00:00:00            /usr/sbin/sshd -D

It's good to set LIMITS on the MEMORY and CPU a container can use:

docker run -it --cpus=".5" ubuntu /bin/bash
docker run -it --memory="512m" ubuntu /bin/bash

Katacoda tutorials about docker, kubernetes, git...

these are really amazing tutorials, hands-on and brilliantly explained. MUCH better than any book I have read so far.

Sunday, January 28, 2018

play with kubernetes

login with your github account

create new instance

kubeadm init --apiserver-advertise-address $(hostname -i)

kubectl apply -n kube-system -f "$(kubectl version | base64 |tr -d '\n')"

kubectl apply -f
which contains this:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

kubectl describe deployment my-nginx
kubectl get pods -l app=nginx

How to expose the service to a public IP ? No clue!

kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --port=8080

kubectl get deployments hello-world
kubectl describe deployments hello-world

kubectl get replicasets
kubectl describe replicasets
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service

kubectl get services my-service

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
my-service   LoadBalancer   <pending>     8080:31900/TCP   50s

pending means "wait".... but it seems that in the playground you will NEVER get an external IP!

kubectl describe services my-service

kubectl get pods --output=wide

in fact, all pods are also in "pending" state

Appendix 1: logs

 You can bootstrap a cluster as follows:

 1. Initializes cluster master node:

 kubeadm init --apiserver-advertise-address $(hostname -i)

 2. Initialize cluster networking:

 kubectl apply -n kube-system -f \
    "$(kubectl version | base64 |tr -d '\n')"

 3. (Optional) Create an nginx deployment:

 kubectl apply -f

                          The PWK team.

[node1 /]$ kubeadm init --apiserver-advertise-address $(hostname -i)
Initializing machine ID from random generator.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.7
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs []
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 31.002238 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node1 as master by adding a label and a taint
[markmaster] Master node1 tainted and labelled with key/value:""
[bootstraptoken] Using token: f7996a.e54fe4f219d3e1d8
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token f7996a.e54fe4f219d3e1d8 --discovery-token-ca-cert-hash sha256:f58fcfb9e0a2adc69f06988e2c0499ab003458a6102bb7b73ffcf115f8882acb

Waiting for api server to startup..........
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset "kube-proxy" configured
No resources found
[node1 /]$
[node1 /]$ kubectl apply -n kube-system -f \
>     "$(kubectl version | base64 |tr -d '\n')"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
role "weave-net" created
rolebinding "weave-net" created
daemonset "weave-net" created

[node1 /]$ kubectl apply -f
service "my-nginx-svc" created
deployment "my-nginx" created

[node1 /]$ kubectl describe deployment my-nginx
Name:                   my-nginx
Namespace:              default
CreationTimestamp:      Sun, 28 Jan 2018 11:56:23 +0000
Labels:                 app=nginx
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 0 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
    Image:        nginx:1.7.9
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   my-nginx-569477d6d8 (3/3 replicas created)
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set my-nginx-569477d6d8 to 3

gcloud on CentOS7

for Windows you can install like this

in Linux it's much simpler:
cd /home/centos/
curl | bash

this will install into /home/centos/google-cloud-sdk/

As region, I chose 8 (us-central1)

in the browser, grant access to your account

gcloud init
gcloud docker
gcloud --version
gcloud compute project-info describe

gcloud info
Google Cloud SDK [186.0.0]

Platform: [Linux, x86_64] ('Linux', 'localhost.localdomain', '3.10.0-693.17.1.el7.x86_64', '#1 SMP Thu Jan 25 20:13:58 UTC 2018', 'x86_64', 'x86_64')
Python Version: [2.7.5 (default, Aug  4 2017, 00:39:18)  [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]]
Python Location: [/usr/bin/python2]
Site Packages: [Disabled]

Installation Root: [/home/centos/google-cloud-sdk]
Installed Components:
  core: [2018.01.22]
  gsutil: [4.28]
  bq: [2.0.28]
System PATH: [/home/centos/google-cloud-sdk/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/centos/.local/bin:/home/centos/bin:/home/centos/apache-maven-3.5.0/bin/:/home/centos/jdk1.8.0_141/bin/]
Python PATH: [/home/centos/google-cloud-sdk/lib/third_party:/home/centos/google-cloud-sdk/lib:/usr/lib64/]
Cloud SDK on PATH: [True]
Kubectl on PATH: [/usr/local/sbin/kubectl]

Installation Properties: [/home/centos/google-cloud-sdk/properties]
User Config Directory: [/home/centos/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/home/centos/.config/gcloud/configurations/config_default]

Account: []
Project: [pippo-189911]

Current Properties:
    project: [pippo-189911]
    account: []
    disable_usage_reporting: [False]
    region: [us-central1]
    zone: [us-central1-a]

Logs Directory: [/home/centos/.config/gcloud/logs]
Last Log File: [/home/centos/.config/gcloud/logs/2018.01.28/]

git: [git version]
ssh: [OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017]

gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1

gcloud compute instances create my-instance

gcloud compute instances list
NAME                                      ZONE           MACHINE_TYPE               PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gke-cluster-1-default-pool-6e000aa7-489w  us-central1-a  custom (1 vCPU, 2.00 GiB)        RUNNING
gke-cluster-1-default-pool-6e000aa7-hrqp  us-central1-a  custom (1 vCPU, 2.00 GiB)       RUNNING
gke-cluster-1-default-pool-6e000aa7-s43m  us-central1-a  custom (1 vCPU, 2.00 GiB)         RUNNING

you can add --format yaml or --format json but it's a lot more verbose

gcloud container clusters upgrade cluster-1 --image-type cos --cluster-version 1.8.6-gke.0

Failed to start node upgrade: Desired node version (1.8.6-gke.0) cannot be greater than current master version (1.7.12-gke.0)

gcloud compute instances describe my-instance --zone us-central1-a

gcloud compute ssh my-instance --zone us-central1-a
gcloud compute scp ~/file-1 my-instance:~/remote-destination --zone us-central1-a

Friday, January 26, 2018

Nexus automation API

The Nexus documentation and examples are horribly fragmentary and scattered - I have rarely seen such a popular product being under-documented in this chaotic way.

Since 3.3 there is a Swagger-UI interface, which is not available in 2.14. Try for instance http://nexus-nodejs/swagger-ui/#!/assets/getAssets (replace nexus-nodejs with your own URL)

A quite explanatory article on REST API (including the /nexus/service/local/ vs /nexus/service/siesta story) is

One can always generate a Java Client from Swagger or also

Apparently Nexus chose Groovy as a language for automation

REST syntax

To discover the syntax of the REST calls (with JSON payload), Nexus recommends simple "request espionage": use the browser Developer Tools (F12 in IE, CTRL-SHIFT-I in Chrome and Firefox), go to the Network tab, start capturing, and execute some commands manually in the Administration UI.

This for instance is how to "whitelist" (pre-approve) use of the xml-apis:xml-apis:1.3.04 (GAV) component in the provisioned repository "approved_from_central":

Request POST /service/local/procurement/resolutions/approved_from_central HTTP/1.1
X-Nexus-UI true
Accept application/json,application/vnd.siesta-error-v1+json,application/vnd.siesta-validation-errors-v1+json
Content-Type application/json
X-Requested-With XMLHttpRequest
Referer http://nexusserver/
Accept-Language de-CH
Accept-Encoding gzip, deflate
User-Agent Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; Ypkch32; rv:11.0) like Gecko
Host nexusserver
Content-Length 170
Connection Keep-Alive
Cache-Control no-cache
Cookie NXSESSIONID=e2265c85-f82b-45ce-a889-f6964cdaa214

Request Body 

Return code is HTTP 201 (created)

In 2.14 you can get the whole list of APIs by logging in as admin: in the left menu click Administration, then Plugin Console, click "Nexus Core API (Restlet 1.x Plugin)" (Provides Nexus Core REST API) and follow the Documentation link http://nexusserver/nexus-restlet1x-plugin/default/docs/index.html

This video seems the only serious attempt to document the REST API

Other useful commands:

#get the status of the repository (the -u option is not necessary!)
curl -u admin:admin123 http://nexusserver/service/local/status

curl -X GET -u admin:admin123 http://nexusserver/service/local/users

#get list of all repositories, in xml format
curl http://nexusserver/service/local/all_repositories

#get list of all assets with "arquillian" in the name
curl http://nexusserver/service/local/data_index?q=arquillian

curl -i -H "Accept: application/xml" -H "Content-Type: application/xml" -X POST -v --trace-ascii - -d "@repository-definition.xml" -u admin:admin123 http://nexusserver/service/local/repositories

See the curl documentation here

There are several github repositories:

git clone
cd nexus-book-examples/
git branch -a
* master
remotes/origin/HEAD -> origin/master
git checkout -b nexus-3.x origin/nexus-3.x

then open this in your firefox:

The source code is available under plugins/nexus-script-plugin :

To parse XML in Python:

To make curl requests (POST, GET) in Python:

Example with Nexus PRO 2.14

cat allowartifact.json

curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X POST -v --trace-ascii - -d "@allowartifact.json" -u admin:admin123 http://nexusserver/service/local/procurement/resolutions/approved_from_central

* About to connect() to nexusserver port 80 (#0)
*   Trying connected
* Connected to nexusserver ( port 80 (#0)
* Server auth using Basic with user 'admin'
> POST /service/local/procurement/resolutions/approved_from_central HTTP/1.1
> Authorization: Basic YWRteee6cG9qqqV4dXM=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: nexusserver
> Accept: application/json
> Content-Type: application/json
> Content-Length: 170
< HTTP/1.1 201 Created
HTTP/1.1 201 Created
< Date: Fri, 26 Jan 2018 13:15:17 GMT
Date: Fri, 26 Jan 2018 13:15:17 GMT
< Server: Nexus/2.14.5-02 Noelios-Restlet-Engine/1.1.6-SONATYPE-5348-V8
Server: Nexus/2.14.5-02 Noelios-Restlet-Engine/1.1.6-SONATYPE-5348-V8
< X-Frame-Options: SAMEORIGIN
X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
X-Content-Type-Options: nosniff
< Content-Type: application/json; charset=UTF-8
Content-Type: application/json; charset=UTF-8
< Content-Length: 191
Content-Length: 191
< Cache-Control: no-store, no-cache, must-revalidate, max-age=1, private, proxy-revalidate
Cache-Control: no-store, no-cache, must-revalidate, max-age=1, private, proxy-revalidate
< Expires: Fri, 26 Jan 2018 13:15:17 GMT
Expires: Fri, 26 Jan 2018 13:15:17 GMT
< Pragma: no-cache
Pragma: no-cache
< Connection: close
Connection: close

* Closing connection #0

and this inserts into nexusserver/conf/procurement.xml :


sample CURL calls

#add -H "Accept: application/json" to get in JSON format, otherwise it defaults to XML
#get pom of an artifact
curl "http://nexus-java/service/local/artifact/maven?g=xml-apis&a=xml-apis&v=1.3.04&r=central"
#get all repos
curl http://nexus-java/service/local/all_repositories
#get all components of a given group
curl -H "Accept: application/xml" http://nexus-java/service/local/lucene/search?g=xml-apis

See the documentation of the lucene indexer

Friday, January 19, 2018

docker weblogic

docker pull ismaleiva90/weblogic12
docker run -d --name myweblogic -p 49163:7001 -p 49164:7002 -p 49165:5556 ismaleiva90/weblogic12:latest
User: weblogic
Pass: welcome1

exit ( go back to host )

generate a basic WAR:

mvn archetype:generate -DgroupId=com.mkyong -DartifactId=pippoWebApp -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

docker cp ./pippoWebApp/target/pippoWebApp.war myweblogic:/u01/oracle/weblogic/user_projects/domains/base_domain

then you have to manually deploy (copying to autodeploy doesn't work, this domain has PRODUCTION mode enabled)


See also

To get the image from Oracle itself:

git clone
cd docker-images/OracleWebLogic/dockerfiles

first you should download the JAR files and scp them here (this really sucks.... why not just curl them in place from a repository...)

./ -v -d -c

Thursday, January 18, 2018

R support in Sonatype Nexus 3.7

What is R ? but even better

What is CRAN ? and

"A core set of packages is included with the installation of R, with more than 11,000 additional packages (as of July 2017) available at the Comprehensive R Archive Network (CRAN), Bioconductor, Omegahat, GitHub, and other repositories."

git clone
cd nexus-repository-r/
git tag -l
git checkout tags/1.0.1
(you are now in detached head, who cares...)
or even better : git checkout tags/1.0.1 -b 1.0.1 (so you don't get a detached head)
WARNING: for Nexus version 3.8.0-02 and above use tag 1.0.1, otherwise 1.0.0
mvn clean install

ls target/nexus-repository-r-1.0.0.jar

docker run -d -p 8081:8081 --name nexus sonatype/nexus3
docker cp nexus-repository-r-1.0.0.jar 6f49bdc956a6:/opt/sonatype/nexus/system/org/sonatype/nexus/plugins/

docker exec -u 0 -ti 6f49bdc956a6 /bin/bash
stty rows 50 cols 132
cd /opt/sonatype/nexus/system/org/sonatype/nexus/plugins/
mkdir -p nexus-repository-r/1.0.0/
mv nexus-repository-r-1.0.0.jar nexus-repository-r/1.0.0/

vi /opt/sonatype/nexus/system/com/sonatype/nexus/assemblies/nexus-oss-feature/3.7.1-02/nexus-oss-feature-3.7.1-02-features.xml

customize the file as per chapter "(most) Permanent Install" (careful! you must add 2 separate sections in the XML file! Nexus stinks, they should provide a CLI to manage plugins, instead of asking you to manually manipulate the XML )

docker stop 6f49bdc956a6
docker start 6f49bdc956a6

when you create a new repository, you should now see the r (group) r (hosted) and r (proxy) types

Tuesday, January 16, 2018

the Docker book by James Turnbull

The book, although a bit outdated, is still excellent: very practical, hands-on and detailed.

dial tcp lookup connection refused

I was trying to

docker run -d -p 8081:8081 --name nexus sonatype/nexus3

and I got the scary error "dial tcp lookup connection refused"

Googling around, people suggest to "reset the docker settings to factory defaults".
Everybody keeps repeating this mantra "reset to factory defaults", but nobody explains how to do it on Linux. I hate it. So I could not figure out what to do. iptables -F didn't work either.

Someone mentions adding an entry to the /etc/hosts file, because the DNS is playing tricks and truncating responses.. scary....

Eventually I rebooted the machine and it all works again. I only wish that Docker Inc stopped playing with docker and kept it stable. I hate them. They are destroying a great product.

Thursday, January 11, 2018

Itinerary to visit Italy

See also my

Everybody goes to Venice, Florence, Pompei and Rome, so I will not cover them. But there are many more astounding places in Italy.

Disclaimer: I am an archeology buff, so I like mostly ancient ruins. The most amazing places I have seen in Italy are:

Paestum - don't miss the museum with amazing frescoes

Agrigento Temple Valley , greek stuff, awesome

Cerveteri absolutely amazing Etruscan cemetery


Napoli and the islands (Ischia, Capri) and the Vesuvio volcano - beware, Napoli is a bit rough, watch your wallet, same thing in Roma.

Ostia Antica

Bologna with its medieval center and the Towers (no VERY ancient stuff here)

Ravenna, the Italian Byzantium

Ferrara, rich in medieval history

Verona with its Arena (roman) and medieval center

Siena and its medieval palaces

Assisi magic atmosphere with white houses and churches

Palermo and its main old palaces

I am aware of the beauty of Lucca and Lecce, but I have never been there.

Wednesday, January 10, 2018

Encrypting stuff with Openssl

Let's first encode base64 the password:

echo -n "pippo" | openssl enc -base64

man echo says: "-n do not output the trailing newline"

This is equivalent to

echo -n "pippo" | base64

You can then decode with "base64 --decode" :

echo -n "pippo" | base64 | base64 --decode

or with "openssl enc -base64 -d":

echo -n "pippo" | openssl enc -base64 | openssl enc -base64 -d
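A quick sanity check that the two encoders really produce the same output and that the round trip is lossless (assuming openssl and GNU base64 are installed):

```shell
# Compare the two base64 encoders and round-trip the value.
a=$(echo -n "pippo" | openssl enc -base64)
b=$(echo -n "pippo" | base64)
[ "$a" = "$b" ] && echo "encoders agree: $a"   # prints: encoders agree: cGlwcG8=
echo -n "pippo" | base64 | base64 --decode     # prints: pippo
```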

If you want to encrypt with a salt:

openssl aes-256-cbc -in mypw.txt -out mypwenc.txt -e -pass pass:pluto

and to decrypt:

openssl aes-256-cbc -in mypwenc.txt -out mypwclear.txt -d -pass pass:pluto
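A minimal round-trip check of the two commands above, assuming openssl is installed: encrypt a throwaway file, decrypt it, and verify the result matches the original.

```shell
# Encrypt with a passphrase, decrypt, and compare with the original.
tmpdir=$(mktemp -d)
echo -n "pippo" > "$tmpdir/mypw.txt"
openssl aes-256-cbc -in "$tmpdir/mypw.txt" -out "$tmpdir/mypwenc.txt" -e -pass pass:pluto
openssl aes-256-cbc -in "$tmpdir/mypwenc.txt" -out "$tmpdir/mypwclear.txt" -d -pass pass:pluto
cmp "$tmpdir/mypw.txt" "$tmpdir/mypwclear.txt" && echo "round trip OK"
rm -rf "$tmpdir"
```

Note that newer OpenSSL versions may print a warning about deprecated key derivation; the round trip still works.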

Details on the openssl command options are here

You can always check here

Jenkins blueocean plugin with Docker

and here the Docker installation instructions

Here all the Docker images

Then I run this:

( I am using Docker 1.12 )

docker run -u root -d -p 8080:8080 -p 50000:50000 -v jenkins-data:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean

port 50000 is needed only if you plan to use slave build agents

(I had to remove --rm otherwise I get Conflicting options: --rm and -d )

docker ps gives you the containerid -> docker exec -ti 03e820715e74 bash

cat /var/jenkins_home/secrets/initialAdminPassword
copy and paste in localhost:8080 to unlock Jenkins

Saturday, January 6, 2018

Google maps Java API

The Google MAPS main page is here

Entry point for Java API is:

The GIT repo is

Nice presentation, but tutorial link is broken

Essential: KML specifications

Witches' Sabbath with yum install and docker

After a major CentOS update, when I "oc cluster up" I get this:

Error: Minor number must not contain leading zeroes "01"

the usual "docker version mismatch" issue (go to hell OpenShift)

So I "yum install docker" and I get this:

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* ius:
* updates:
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.12.6-68.gitec8512b.el7.centos will be installed
--> Processing Dependency: docker-common = 2:1.12.6-68.gitec8512b.el7.centos for package: 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64
--> Processing Dependency: docker-client = 2:1.12.6-68.gitec8512b.el7.centos for package: 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64
--> Running transaction check
---> Package docker-client.x86_64 2:1.12.6-68.gitec8512b.el7.centos will be installed
---> Package docker-common.x86_64 2:1.12.6-68.gitec8512b.el7.centos will be installed
--> Processing Conflict: docker-ce-18.01.0.ce-0.1.rc1.el7.centos.x86_64 conflicts docker
--> Processing Conflict: docker-ce-18.01.0.ce-0.1.rc1.el7.centos.x86_64 conflicts docker-io
--> Finished Dependency Resolution
Error: docker-ce conflicts with 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

I try "yum install docker --skip-broken", but I still get:

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base:
* epel:
* extras:
* ius:
* updates:
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.12.6-68.gitec8512b.el7.centos will be installed
--> Processing Dependency: docker-common = 2:1.12.6-68.gitec8512b.el7.centos for package: 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64
--> Processing Dependency: docker-client = 2:1.12.6-68.gitec8512b.el7.centos for package: 2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64
--> Running transaction check
---> Package docker-client.x86_64 2:1.12.6-68.gitec8512b.el7.centos will be installed
---> Package docker-common.x86_64 2:1.12.6-68.gitec8512b.el7.centos will be installed
--> Processing Conflict: docker-ce-18.01.0.ce-0.1.rc1.el7.centos.x86_64 conflicts docker
--> Processing Conflict: docker-ce-18.01.0.ce-0.1.rc1.el7.centos.x86_64 conflicts docker-io
extras/7/x86_64/filelists_db | 528 kB 00:00:00

Packages skipped because of dependency problems:
2:docker-1.12.6-68.gitec8512b.el7.centos.x86_64 from extras
2:docker-client-1.12.6-68.gitec8512b.el7.centos.x86_64 from extras
2:docker-common-1.12.6-68.gitec8512b.el7.centos.x86_64 from extras

Then I do yum list installed | grep docker

docker-ce.x86_64 18.01.0.ce-0.1.rc1.el7.centos @docker-ce-test

Then "yum remove docker-ce.x86_64" followed by "yum install docker", which installs:

docker.x86_64 2:1.12.6-68.gitec8512b.el7.centos

Dependency Installed:
docker-client.x86_64 2:1.12.6-68.gitec8512b.el7.centos docker-common.x86_64 2:1.12.6-68.gitec8512b.el7.centos

and everything works again.

I used to love Docker, but since it was split into CE and EE they have really made a mess of it. Too much greed for money, and the need to place expensive consultants.

List of useful yum commands:

yum install bla
yum -y install bla
yum remove bla
yum update bla
yum search bla
yum info bla
yum list
yum list installed
#this finds which package a file belongs to
yum provides filename

yum grouplist
yum groupinstall 'bla'
yum groupupdate 'bla'
yum groupremove 'bla'

yum repolist
yum repolist all
yum --enablerepo=fedora-source install vim-X11.x86_64

Friday, January 5, 2018

CentOS install JDK

which java
java -version
ls -ltr /usr/bin/java
ls -ltr /etc/alternatives/java

sudo yum update

yum list installed | grep "java"

sudo yum install java-1.8.0-openjdk-devel

cd /usr/lib/jvm/java-1.8.0-openjdk-

sudo ln -sf /usr/lib/jvm/java-1.8.0-openjdk- /etc/alternatives/java
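
The two `ls -ltr` checks above exist because on CentOS `java` is resolved through a two-hop symlink chain: /usr/bin/java points to /etc/alternatives/java, which points to the actual JDK binary. A disposable sketch of the same indirection (all paths under /tmp/altdemo are made up for illustration):

```shell
# Recreate the alternatives-style indirection with a throwaway symlink chain.
mkdir -p /tmp/altdemo
printf '#!/bin/sh\necho fake-java\n' > /tmp/altdemo/real-java    # stands in for the JDK binary
chmod +x /tmp/altdemo/real-java
ln -sf /tmp/altdemo/real-java /tmp/altdemo/alternatives-java     # plays /etc/alternatives/java
ln -sf /tmp/altdemo/alternatives-java /tmp/altdemo/java          # plays /usr/bin/java
readlink -f /tmp/altdemo/java    # resolves through both hops to real-java
/tmp/altdemo/java                # runs the real script through the chain
```

Repointing the middle link (which is what the `ln -sf ... /etc/alternatives/java` command above does) switches every caller to a different JDK without ever touching /usr/bin/java.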

Gnome hotcorner: most annoying unwanted feature ever

I moved away from Windows because I was really fed up with Microsoft, and installed CentOS with Gnome, only to discover that Gnome is even worse!

If by mistake you move the mouse to the top-left corner of the screen, all your open windows stack up and you are forced to click a thumbnail to resume work.

And apparently, after 20 minutes of googling, there is no easy way to disable this unasked-for, unwanted trollish feature.

Even in Applications/Utilities/Tweak tools I could not find an option to disable this stupid crap.

YAGNI: you will never need this stupid trick, so why enable it by default? I simply hate UIs overloaded with hidden features... KEEP IT SIMPLE!

Tuesday, January 2, 2018

Linux misc (iptables, dig, rpm)

echo $LANG

CTRL-SHIFT-U allows you to enter the 4 hex digits of a Unicode code point (e.g. 03bb is the Greek lambda, λ)


localectl status

System Locale: LANG=en_US.UTF-8
VC Keymap: it
X11 Layout: it,us
X11 Variant: ,

to change locale:
localectl set-locale LANG=fr_FR.utf8

cat /etc/locale.conf

list all available language packs:
yum langavailable

list installed language packs:
yum langlist

iptables tutorial instructions:

sudo iptables -L -v
sudo iptables -F
sudo cat /etc/sysconfig/iptables

systemctl status firewalld
systemctl status NetworkManager

history: remove line numbers by adding this to .bash_profile:
HISTTIMEFORMAT="$(echo -e '\r\e[K')"
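
The reason this works: `history` prints each entry as number, timestamp, command; with the value above, the "timestamp" is actually a carriage return plus the ANSI erase-to-end-of-line sequence, so the cursor jumps back to column 0 and wipes the line number before the command text is printed. You can inspect the bytes it stores:

```shell
# Build the same value and dump its raw bytes: CR, ESC, '[', 'K'.
v="$(printf '\r\033[K')"
printf '%s' "$v" | od -An -c
```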

#rpm: list all installed packages
rpm -qa | grep ansible
#this installs 1.1 without removing 1.0, if it exists
rpm -ivh some-package-1.1
#this updates 1.0 (if installed) to 1.1
rpm -Uvh some-package-1.1