Monday, September 30, 2019
beauty of Spring Data
thanks to Greg Turnquist for this introduction:
https://www.slideshare.net/SpringCentral/introduction-to-spring-data
https://spring.io/guides/ (search for "data")
my code is available here https://github.com/vernetto/springdata
Labels:
Spring,
springdata
Saturday, September 21, 2019
Istio presentation by Burr Sutter
Presentation slides http://bit.ly/istio-canaries
and the code is here https://github.com/redhat-developer-demos/istio-tutorial
What I understand is that Istio provides you a central control plane to monitor and manage routes among services - traffic is intercepted via iptables rules and redirected through Envoy sidecar proxies, so you don't need a proxy/client library inside each application.
plenty of labs here https://github.com/redhat-developer-demos/istio-tutorial
So Istio should largely replace the Spring Cloud / Netflix OSS stack (Hystrix, Sleuth, service registry...) - if I understand correctly.
Labels:
istio
Thursday, September 19, 2019
REST Management Services in WebLogic, reverse engineered
wls-management-services.war
Only users with the Administrator or Operator role can invoke it.
weblogic.management.rest.Application is the main entry point.
weblogic.management.rest.bean.utils.load.BuiltinResourceInitializer: all the MBeans are loaded here.
weblogic.management.runtime.ServerRuntimeMBean
weblogic.management.rest.wls.resources.server.ShutdownServerResource is the REST endpoint for shutdown:
@POST
@Produces({"application/json"})
public Response shutdownServer(@QueryParam("__detached") @DefaultValue("false") boolean detached, @QueryParam("force") @DefaultValue("false") boolean force, @PathParam("server") String name) throws Exception {
return this.getJobResponse(name, ServerOperationUtils.shutdown(this.getRequest(), name, detached, force), new ShutdownJobMessages(this));
}
weblogic.management.rest.wls.utils.ServerOperationUtils
From MBean:
com.bea.console.actions.core.server.lifecycle.Lifecycle$AdminServerShutdownJob
http://localhost:7001/console/jsp/core/server/lifecycle/ConsoleShutdown.jsp
weblogic.t3.srvr.GracefulShutdownRequest
weblogic.t3.srvr.ServerGracefulShutdownTimer
via JMX:
weblogic.management.mbeanservers.runtime.RuntimeServiceMBean extends Service : String OBJECT_NAME = "com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean"
public interface ServerRuntimeMBean extends RuntimeMBean, HealthFeedback, ServerStates, ServerRuntimeSecurityAccess
void shutdown(int var1, boolean var2, boolean var3) throws ServerLifecycleException;
https://docs.oracle.com/middleware/1221/wls/WLAPI/weblogic/management/runtime/ServerRuntimeMBean.html#shutdown_int__boolean__boolean_
For a list of REST Examples see also https://docs.oracle.com/middleware/12212/wls/WLRUR/WLRUR.pdf
----------------------------------------------------------------------
Asynchronously force shutdown a server
----------------------------------------------------------------------
curl -v \
--user operator:operator123 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-H "Prefer:respond-async" \
-X POST http://localhost:7001/management/weblogic/latest/domainRuntime/serverLifeCycleRuntimes/Cluster1Server2/forceShutdown

HTTP/1.1 202 Accepted
Location: http://localhost:7001/management/weblogic/latest/domainRuntime/serverLifeCycleRuntimes/Cluster1Server2/tasks/_3_forceShutdown
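A natural follow-up (my own sketch, not taken from the Oracle PDF) is to poll the task resource returned in the Location header until the job reports completion:

curl -s --user operator:operator123 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-X GET http://localhost:7001/management/weblogic/latest/domainRuntime/serverLifeCycleRuntimes/Cluster1Server2/tasks/_3_forceShutdown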
Labels:
weblogic
Wednesday, September 18, 2019
Container PID 1
Priceless article on PID 1, SIGTERM and kill in containers:
https://blog.no42.org/code/docker-java-signals-pid1/
the trick is launching the JVM with "exec java ..." in the entrypoint script, so that the java process becomes PID 1 and receives signals directly.
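A minimal sketch of such an entrypoint (file and jar names are just examples):

#!/bin/sh
# entrypoint.sh - exec replaces the shell, so the JVM becomes PID 1 and gets SIGTERM directly
exec java -jar /app/myapp.jar

and in the Dockerfile: ENTRYPOINT ["/entrypoint.sh"]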
$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGEMT 8) SIGFPE 9) SIGKILL 10) SIGBUS
11) SIGSEGV 12) SIGSYS 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGURG 17) SIGSTOP 18) SIGTSTP 19) SIGCONT 20) SIGCHLD
21) SIGTTIN 22) SIGTTOU 23) SIGIO 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGPWR 30) SIGUSR1
31) SIGUSR2 32) SIGRTMIN 33) SIGRTMIN+1 34) SIGRTMIN+2 35) SIGRTMIN+3
36) SIGRTMIN+4 37) SIGRTMIN+5 38) SIGRTMIN+6 39) SIGRTMIN+7 40) SIGRTMIN+8
41) SIGRTMIN+9 42) SIGRTMIN+10 43) SIGRTMIN+11 44) SIGRTMIN+12 45) SIGRTMIN+13
46) SIGRTMIN+14 47) SIGRTMIN+15 48) SIGRTMIN+16 49) SIGRTMAX-15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9
56) SIGRTMAX-8 57) SIGRTMAX-7 58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4
61) SIGRTMAX-3 62) SIGRTMAX-2 63) SIGRTMAX-1 64) SIGRTMAX
"The SIGTERM signal is a generic signal used to cause program termination. Unlike SIGKILL, this signal can be blocked, handled, and ignored. It is the normal way to politely ask a program to terminate."
See also https://docs.docker.com/v17.12/engine/reference/run/#specify-an-init-process
"You can use the --init flag to indicate that an init process should be used as the PID 1 in the container. Specifying an init process ensures the usual responsibilities of an init system, such as reaping zombie processes, are performed inside the created container."
https://github.com/krallin/tini "All Tini does is spawn a single child (Tini is meant to be run in a container), and wait for it to exit all the while reaping zombies and performing signal forwarding." "Tini is included in Docker itself"
"A process running as PID 1 inside a container is treated specially by Linux: it ignores any signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is coded to do so."
Tuesday, September 17, 2019
REST interface to manage WLS
this works like magic:
curl -s -v --user weblogic:weblogic0 -H X-Requested-By:MyClient -H Accept:application/json -H Content-Type:application/json -d "{timeout: 10, ignoreSessions: true }" -X POST http://localhost:7001/management/wls/latest/servers/id/AdminServer/shutdown
The problem comes when you have only HTTPS, and it gets even worse with 2-way SSL. Then you are screwed - pardon my French - because curl by default understands only PEM certificates, so if you have a p12 you must convert the p12 into 2 separate files, certificate and private key:
openssl pkcs12 -in mycert.p12 -out file.key.pem -nocerts -nodes
openssl pkcs12 -in mycert.p12 -out file.crt.pem -clcerts -nokeys
curl -E ./file.crt.pem --key ./file.key.pem https://myservice.com/service?wsdl
CORRECTION: it seems that curl does now support p12 certs: curl --cert-type P12 ... https://curl.haxx.se/docs/manpage.html BUT only with certain TLS backends (e.g. the Apple "Secure Transport" library), not with the NSS or OpenSSL backends (run "curl -V" to find out which backend your build uses).
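If your curl build does support it, the direct p12 usage looks roughly like this (a sketch; file name and password are placeholders):

curl -V
curl --cert-type P12 --cert ./mycert.p12:mypassword https://myservice.com/service?wsdl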
See more here https://docs.oracle.com/middleware/1221/wls/WLRUR/using.htm#WLRUR180
return all servers:
curl -s --user weblogic:weblogic0 http://localhost:7001/management/weblogic/latest/edit/servers
Labels:
weblogic
Saturday, September 14, 2019
Sockets leak, a case study
We get a "too many files open", and lsof reveals some 40k IPv6 connections.
What happens if you forget to close a Socket ?
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class ConnLeak {
    public static void main(String[] args) throws InterruptedException, IOException {
        for (;;) {
            Thread.sleep(1000);
            leak();
        }
    }

    static void leak() throws IOException {
        System.out.println("connecting ");
        String hostName = "localhost";
        int portNumber = 8080;
        Socket echoSocket = new Socket(hostName, portNumber);
        BufferedReader in = new BufferedReader(new InputStreamReader(echoSocket.getInputStream()));
        System.out.println(in.readLine());
        // in.close();         // WE FORGOT TO CLOSE!
        // echoSocket.close(); // WE FORGOT TO CLOSE!
        System.out.println("done");
    }
}
[centos@localhost ~]$ netstat -an | grep WAIT
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50008 FIN_WAIT2
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50020 FIN_WAIT2
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50016 FIN_WAIT2
tcp6 0 0 127.0.0.1:50014 127.0.0.1:8080 CLOSE_WAIT
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50006 FIN_WAIT2
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50012 FIN_WAIT2
tcp6 0 0 127.0.0.1:50020 127.0.0.1:8080 CLOSE_WAIT
tcp6 578 0 ::1:8080 ::1:41240 CLOSE_WAIT
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50014 FIN_WAIT2
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50018 FIN_WAIT2
tcp6 0 0 127.0.0.1:50022 127.0.0.1:8080 CLOSE_WAIT
tcp6 0 0 127.0.0.1:50016 127.0.0.1:8080 CLOSE_WAIT
tcp6 0 0 127.0.0.1:50006 127.0.0.1:8080 CLOSE_WAIT
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50022 FIN_WAIT2
tcp6 0 0 127.0.0.1:8080 127.0.0.1:50010 FIN_WAIT2
tcp6 0 0 127.0.0.1:50018 127.0.0.1:8080 CLOSE_WAIT
tcp6 0 0 127.0.0.1:50010 127.0.0.1:8080 CLOSE_WAIT
tcp6 0 0 127.0.0.1:50008 127.0.0.1:8080 CLOSE_WAIT
tcp6 0 0 127.0.0.1:50012 127.0.0.1:8080 CLOSE_WAIT
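A quick way to count connections per TCP state while the leak grows (my own one-liner, standard tools only):

netstat -an | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -rn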
you end up with a pile of socket file descriptors in FIN_WAIT2+CLOSE_WAIT
If instead you DO close the socket, you have only fd in TIME_WAIT
In the LEAKING case, you can observe the leaking fd:
sudo lsof -p 10779
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 10779 centos cwd DIR 253,0 4096 52165832 /home/centos
java 10779 centos rtd DIR 253,0 243 64 /
java 10779 centos txt REG 253,0 7734 33728521 /home/centos/jdk1.8.0_141/bin/java
java 10779 centos mem REG 253,0 106070960 56227 /usr/lib/locale/locale-archive
java 10779 centos mem REG 253,0 115814 52241403 /home/centos/jdk1.8.0_141/jre/lib/amd64/libnet.so
java 10779 centos mem REG 253,0 66216625 3748173 /home/centos/jdk1.8.0_141/jre/lib/rt.jar
java 10779 centos mem REG 253,0 124327 52241426 /home/centos/jdk1.8.0_141/jre/lib/amd64/libzip.so
java 10779 centos mem REG 253,0 62184 56246 /usr/lib64/libnss_files-2.17.so
java 10779 centos mem REG 253,0 225914 52241483 /home/centos/jdk1.8.0_141/jre/lib/amd64/libjava.so
java 10779 centos mem REG 253,0 66472 52241404 /home/centos/jdk1.8.0_141/jre/lib/amd64/libverify.so
java 10779 centos mem REG 253,0 44448 42091 /usr/lib64/librt-2.17.so
java 10779 centos mem REG 253,0 1139680 56236 /usr/lib64/libm-2.17.so
java 10779 centos mem REG 253,0 17013932 19043494 /home/centos/jdk1.8.0_141/jre/lib/amd64/server/libjvm.so
java 10779 centos mem REG 253,0 2127336 42088 /usr/lib64/libc-2.17.so
java 10779 centos mem REG 253,0 19776 56234 /usr/lib64/libdl-2.17.so
java 10779 centos mem REG 253,0 102990 52241266 /home/centos/jdk1.8.0_141/lib/amd64/jli/libjli.so
java 10779 centos mem REG 253,0 144792 56254 /usr/lib64/libpthread-2.17.so
java 10779 centos mem REG 253,0 164264 34688 /usr/lib64/ld-2.17.so
java 10779 centos mem REG 253,0 32768 41856127 /tmp/hsperfdata_centos/10779
java 10779 centos 0u CHR 136,1 0t0 4 /dev/pts/1
java 10779 centos 1u CHR 136,1 0t0 4 /dev/pts/1
java 10779 centos 2u CHR 136,1 0t0 4 /dev/pts/1
java 10779 centos 3r REG 253,0 66216625 3748173 /home/centos/jdk1.8.0_141/jre/lib/rt.jar
java 10779 centos 4u unix 0xffff8801eaabfc00 0t0 327936 socket
java 10779 centos 5u IPv6 327938 0t0 TCP localhost:50132->localhost:webcache (CLOSE_WAIT)
java 10779 centos 6u IPv6 327940 0t0 TCP localhost:50134->localhost:webcache (CLOSE_WAIT)
java 10779 centos 7u IPv6 327942 0t0 TCP localhost:50136->localhost:webcache (CLOSE_WAIT)
java 10779 centos 8u IPv6 327146 0t0 TCP localhost:50138->localhost:webcache (CLOSE_WAIT)
java 10779 centos 9u IPv6 329030 0t0 TCP localhost:50140->localhost:webcache (CLOSE_WAIT)
java 10779 centos 10u IPv6 329036 0t0 TCP localhost:50142->localhost:webcache (CLOSE_WAIT)
java 10779 centos 11u IPv6 326450 0t0 TCP localhost:50146->localhost:webcache (CLOSE_WAIT)
java 10779 centos 12u IPv6 328073 0t0 TCP localhost:50148->localhost:webcache (CLOSE_WAIT)
java 10779 centos 13u IPv6 329065 0t0 TCP localhost:50150->localhost:webcache (ESTABLISHED)
A Java Flight Recorder recording also reveals the source of the leak.
You can view those sockets in /proc/10779/fd/ folder (use ls -ltra, they are links)
and display more info with
cat /proc/10779/net/sockstat
sockets: used 881
TCP: inuse 9 orphan 0 tw 9 alloc 94 mem 75
UDP: inuse 8 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
(see also https://www.cyberciti.biz/faq/linux-find-all-file-descriptors-used-by-a-process/ for procfs info )
Labels:
socket
Friday, September 13, 2019
Scrum
Excellent presentation:
Plan, Build, Test, Review, Deploy -> Potentially Deliverable Product
Several Incremental releases (Sprint)
Product Owner
Scrum Master
Team
Product Backlog (User Stories = feature sets) "as user I need something so that bla"
Sprint Backlog
Burndown Chart
3 Ceremonies:
- Sprint Planning (estimate user stories sizing)
- Daily Scrum (what is completed, what they work on, what is blocking them)
- Sprint Backlog (things to do in the current Sprint)
Sprint Review
Sprint Retrospective
Story Format: WHO... WHAT...WHY...
Story Points, FIBONACCI sequence 1 2 3 5 8 13
Minimum Viable Product = something you can demonstrate to customer
Labels:
scrum
Monday, September 9, 2019
podman! skopeo!
sudo yum install epel-release -y
sudo yum install dnf -y
sudo dnf install -y podman
segmentation fault!
alias docker=podman
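A quick smoke test once the alias is in place (the image name is just an example):

podman pull docker.io/library/busybox
podman run --rm docker.io/library/busybox echo hello
podman ps -a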
funny presentation
https://github.com/containers/conmon
Sunday, September 8, 2019
docker cheat sheets
https://github.com/wsargent/docker-cheat-sheet
About "docker inspect". you can inspect a container or an image.
docker inspect $containerid
-> there are 2 places with "Image", one showing the image's sha2, the other the image's name.
If you use the sha2 to identify the image, remember it's truncated to the leftmost 12 digits.
docker image inspect $imagename (or $imagesha2)
here you find the Cmd and Entrypoint`
You can assign a label to image in Dockerfile:
LABEL app=hello-world
and use it to filter:
docker images --filter "label=app=hello-world"
since version 18.6 you can use BuildKit https://docs.docker.com/engine/reference/builder/#label :
without BuildKit:
export DOCKER_BUILDKIT=1
with BuildKit:
CMD vs ENTRYPOINT
if you use
ENTRYPOINT echo "pippo "
docker build -t echopippo .
docker run echopippo
it will print "pippo"
if you use also CMD
ENTRYPOINT echo "pippo "
CMD " peppo"
it will still print only "pippo". To append arguments, you must use the JSON array format (i.e. square brackets)
ENTRYPOINT ["echo", "pippo "]
CMD [" peppo"]
this will print "pippo peppo".
If you provide an extra parameter from command line:
docker run echopippo pluto
then CMD is ignored, and the parameter from command line is used:
pippo pluto
Priceless Romin Irani turorial on :
docker volumes: https://rominirani.com/docker-tutorial-series-part-7-data-volumes-93073a1b5b72
Dockerfile: https://rominirani.com/docker-tutorial-series-writing-a-dockerfile-ce5746617cd
Other tutorials here https://github.com/botchagalupe/DockerDo
About "docker inspect". you can inspect a container or an image.
docker inspect $containerid
-> there are 2 places with "Image": one shows the image's SHA-256 ID, the other the image's name.
If you use the SHA-256 ID to identify the image, remember it is usually displayed truncated to the first 12 hex characters.
docker image inspect $imagename (or $imagesha2)
here you find the Cmd and Entrypoint.
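You can also extract just those two fields with a Go template (a sketch, same $imagename as above):

docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' $imagename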
You can assign a label to image in Dockerfile:
LABEL app=hello-world
and use it to filter:
docker images --filter "label=app=hello-world"
since version 18.06 you can use BuildKit https://docs.docker.com/engine/reference/builder/#label :
without BuildKit:
docker build -t myhw .
Sending build context to Docker daemon  9.728kB
Step 1/3 : FROM busybox:latest
 ---> db8ee88ad75f
Step 2/3 : CMD echo "Hello world date=" `date`
 ---> Using cache
 ---> 22bd2fd85b95
Step 3/3 : LABEL app=hello-world
 ---> Running in 1350a308f4eb
Removing intermediate container 1350a308f4eb
 ---> 7a576b758d86
Successfully built 7a576b758d86
Successfully tagged myhw:latest
export DOCKER_BUILDKIT=1
with BuildKit:
docker build -t myhw .
[+] Building 2.2s (5/5) FINISHED
 => [internal] load .dockerignore                                     1.5s
 => => transferring context: 2B                                       0.0s
 => [internal] load build definition from Dockerfile                  1.2s
 => => transferring dockerfile: 181B                                  0.0s
 => [internal] load metadata for docker.io/library/busybox:latest     0.0s
 => [1/1] FROM docker.io/library/busybox:latest                       0.0s
 => => resolve docker.io/library/busybox:latest                       0.0s
 => exporting to image                                                0.0s
 => => exporting layers                                               0.0s
 => => writing image sha256:4ce77b76b309be143c25ad75add6cdf17c282491e2966e6ee32edc40f802b1f4  0.0s
 => => naming to docker.io/library/myhw
CMD vs ENTRYPOINT
if you use
ENTRYPOINT echo "pippo "
docker build -t echopippo .
docker run echopippo
it will print "pippo"
if you use also CMD
ENTRYPOINT echo "pippo "
CMD " peppo"
it will still print only "pippo". To append arguments, you must use the JSON array format (i.e. square brackets)
ENTRYPOINT ["echo", "pippo "]
CMD [" peppo"]
this will print "pippo peppo".
If you provide an extra parameter from command line:
docker run echopippo pluto
then CMD is ignored, and the parameter from command line is used:
pippo pluto
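To reproduce the whole thing quickly (a sketch based on the snippets above; busybox is just a small base image):

cat > Dockerfile <<'EOF'
FROM busybox:latest
ENTRYPOINT ["echo", "pippo "]
CMD [" peppo"]
EOF
docker build -t echopippo .
docker run --rm echopippo        # prints "pippo  peppo"
docker run --rm echopippo pluto  # prints "pippo  pluto"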
Priceless Romin Irani tutorials on:
docker volumes: https://rominirani.com/docker-tutorial-series-part-7-data-volumes-93073a1b5b72
Dockerfile: https://rominirani.com/docker-tutorial-series-writing-a-dockerfile-ce5746617cd
Other tutorials here https://github.com/botchagalupe/DockerDo
Labels:
cmd,
docker,
entrypoint
Thursday, September 5, 2019
running ssh workflow on group of servers
#generate sample servers
for i in {1..110}; do printf "myserver%05d\n" $i; done > myservers.txt
#group servers by 20
count=0
group=0
for line in $(cat myservers.txt); do
  printf "GROUP_%03d %s\n" $group $line
  ((count=$count + 1))
  if [ $count -eq 20 ]; then
    ((group=$group + 1))
    count=0
  fi
done >> grouped.txt
#print servers in given group
cat grouped.txt | grep GROUP_005 | awk -F' ' '{print $2}'
#put this in steps.sh
myserver=$1
step=$2
case $step in
  1) echo "hello, this is step 1 for server $myserver" ;;
  2) echo "ciao, this is step 2 for server $myserver" ;;
  3) echo "Gruezi, this is step 3 for server $myserver" ;;
  *) echo "ERROR invalid step $step"
esac
#execute given step for GROUP
THESTEP=3; cat grouped.txt | grep GROUP_005 | awk -vstep="$THESTEP" -F' ' '{print $2,step}' | xargs -n 2 ./steps.sh
(-n 2 makes xargs pass one "server step" pair per invocation of steps.sh; without it, all pairs would be handed to a single call and only the first pair would be processed)
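To actually run the steps over ssh and in parallel, a variation of the same pipeline (a sketch assuming key-based ssh access; the remote command inside steps.sh is an example):

#inside steps.sh each step would run something like: ssh "$myserver" 'sudo systemctl restart myservice'
THESTEP=3; grep GROUP_005 grouped.txt | awk -vstep="$THESTEP" '{print $2, step}' | xargs -n 2 -P 4 ./steps.sh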
Sunday, September 1, 2019
saltstack getting started
curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh -M #install master and minion on same node
configuration lives under /etc/salt
in /etc/salt/minion set:
master: localhost
id: myminion
sudo systemctl restart salt-minion
sudo systemctl restart salt-master
sudo salt-key
sudo salt-key -a myminion (then type Y)
sudo salt '*' test.ping
sudo salt-call sys.doc test.ping
sudo salt '*' cmd.run_all 'echo HELLO'
#list matcher
sudo salt -L 'myminion' test.ping
sudo salt '*' grains.item os
sudo salt '*' grains.items
sudo salt --grain 'os:CentOS' test.ping
#custom grains
cat /etc/salt/grains
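For example, a custom grain set on the minion and matched with -G (the values are illustrative):

#/etc/salt/grains on the minion:
#environment: dev
sudo systemctl restart salt-minion
sudo salt -G 'environment:dev' test.ping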
#list all execution modules
sudo salt '*' sys.list_modules
sudo salt '*' pkg.install htop
sudo salt '*' state.sls apache
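The apache state above assumes a state file such as /srv/salt/apache.sls; a minimal sketch (httpd is the package/service name on CentOS):

#/srv/salt/apache.sls
apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - require:
      - pkg: apache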
Ref:
Colton Myers - Learning SaltStack (Packt Publishing), a really direct and hands-on book.
(PS I had a quick look at "Mastering SaltStack", also by Packt; it's waaay too abstract and blablaistic.)
https://docs.saltstack.com/en/latest/
Labels:
saltstack
ports and pods
#start a pod with default parameters
kubectl run nginx --image=nginx --restart=Never
kubectl describe pod nginx
Node: node01/172.17.0.36
IP: 10.32.0.2
#we can reach nginx with the "IP" address
curl 10.32.0.2:80
but "curl 172.17.0.36:80" doesn't work!
kubectl describe nodes node01
InternalIP: 172.17.0.36
10.32.0.2 is the Pod's IP (=same IP for all containers running in that Pod):
kubectl exec -ti nginx bash
hostname -i
10.32.0.2
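The same Pod IP can also be read without entering the container (a sketch using jsonpath):

kubectl get pod nginx -o jsonpath='{.status.podIP}'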
The Node IP cannot be used as such to reach the Pod/Container.
Setting spec.containers.ports.containerPort will change neither the IP nor the port at which nginx is listening: this parameter is purely "declarative" and only becomes useful when you expose the Pod/Deployment with a Service.
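You can see where containerPort ends up by generating the manifest without creating anything (a sketch; kubectl syntax as of the time of these notes, newer versions want --dry-run=client):

kubectl run nginx --image=nginx --restart=Never --port=80 --dry-run -o yaml
#...
#    ports:
#    - containerPort: 80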
If you want to "expose" to an IP other than the Pod's IP:
kubectl expose pod nginx --port=8089 --target-port=80
kubectl describe service nginx
Type: ClusterIP
IP: 10.99.136.123
Port: 8089/TCP
TargetPort: 80/TCP
Endpoints: 10.32.0.2:80
NB this IP 10.99.136.123 is neither the Node's IP nor the Pod's IP; it's a Service-specific (cluster) IP.
curl 10.99.136.123:8089
kubectl get service --all-namespaces
NAMESPACE   NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
default     nginx   ClusterIP   10.99.136.123   <none>        8089/TCP   16m
Ref:
https://kubernetes.io/docs/concepts/cluster-administration/networking/
Labels:
kubernetes