I am evaluating Kubernetes as a platform for our new application, and so far it all looks very exciting! However, I'm running into a problem: I'm hosting my cluster on GCE and I need some mechanism to share storage between two pods - the continuous integration server and my application server. What's the best way of doing this with Kubernetes? None of the volume types seems to fit my needs, since GCE disks can't be shared if one pod needs to write to the disk. NFS would be perfect, but it seems to require special build options for the Kubernetes cluster?

EDIT: Sharing storage seems to be a problem that I have encountered multiple times now using Kubernetes. There are multiple use cases where I'd just like to have one volume and hook it up to multiple pods (with write access). I can only assume that this would be a common use case, no?

EDIT2: For example, this page describes how to set up an Elasticsearch cluster, but wiring it up with persistent storage is impossible (as described here), which kind of renders it pointless :(


1 Answer

Best answer

Here I will show you how to create a Pod that runs two containers. The two containers share a volume that they can use to communicate. Here is the configuration file for the Pod:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]

In the configuration file, you can see that the Pod has a volume named shared-data.

The first container listed in the configuration file runs an Nginx server; the mount path for the shared volume is /usr/share/nginx/html. The second container is based on the Debian image and has a mount path of /pod-data. The second container runs the following command and then terminates:

echo Hello from the debian container > /pod-data/index.html

Notice that the second container writes the index.html file into the root directory of the Nginx server.
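A note on emptyDir: the shared directory above lives on the node's disk and is deleted when the Pod is removed, so it is only suitable for scratch data. If you would rather back the scratch space with RAM, Kubernetes also supports a memory-backed variant; a sketch where only the volume stanza changes and the rest of the Pod spec stays the same:

```yaml
  volumes:
  - name: shared-data
    emptyDir:
      medium: Memory    # mounts a tmpfs; contents count against the containers' memory usage
```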

Create the Pod and the two Containers:

kubectl apply -f https://k8s.io/examples/pods/two-container-pod.yaml

View information about the Pod and the Containers:

kubectl get pod two-containers --output=yaml 

Here is a portion of the output:

apiVersion: v1
kind: Pod
metadata:
  ...
  name: two-containers
  namespace: default
  ...
spec:
  ...
status:
  containerStatuses:
  - containerID: docker://c1d8abd1 ...
    image: debian
    ...
    lastState:
      terminated:
        ...
    name: debian-container
    ...
  - containerID: docker://96c1ff2c5bb ...
    image: nginx
    ...
    name: nginx-container
    ...
    state:
      running:
        ...

You can see that the Debian container has terminated, and the Nginx container is still running.

Get a shell into the Nginx container:

kubectl exec -it two-containers -c nginx-container -- /bin/bash

In your shell, verify that Nginx is running:

root@two-containers:/# apt-get update
root@two-containers:/# apt-get install curl procps
root@two-containers:/# ps aux

The output is similar to this:

USER       PID ... STAT START   TIME COMMAND
root         1 ... Ss   21:12   0:00 nginx: master process nginx -g daemon off;

Recall that the Debian container created the index.html file in the Nginx root directory. Use curl to send a GET request to the Nginx server:

root@two-containers:/# curl localhost 

The output shows that Nginx serves a web page written by the Debian container:

Hello from the debian container
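The example above shares a volume between two containers in the same Pod. For your actual question (sharing writable storage between two separate pods on GCE), an NFS-backed volume with ReadWriteMany access is the usual approach, and it does not require special build options for the cluster: you can point a PersistentVolume at any NFS export, for example an NFS server you run yourself on a GCE instance or inside the cluster. A sketch; the server address 10.0.0.2, the export path /exports, and the 10Gi size are placeholders you would replace with your own:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany          # multiple pods may mount this read-write
  nfs:
    server: 10.0.0.2       # placeholder: your NFS server's address
    path: /exports         # placeholder: the exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

Both the CI pod and the application pod can then reference shared-nfs-claim in a persistentVolumeClaim volume entry and mount it read-write at whatever path each needs.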
