
Heathrow Airport Departures Hacked

An internet kiosk in the airport… one more hacked airport

London's Heathrow Airport is considered one of the most secure international airports in the world, but it has nevertheless been hacked.

Recently, the growing terrorist threat, which makes it a high-risk target, has led to a stricter security policy, with very thorough check-in procedures and continuous vigilance across the whole airport perimeter. While the level of attention paid to physical security is sufficiently high, the same does not seem to be true of the information systems and IT security: for instance, at the Heathrow Airport Departures (and the same holds for the Heathrow Terminals International Arrivals), a few kiosks have been hacked.

One of our users recently found something very interesting about this, in particular about the kiosks in the airport: customer data and private information were exposed by a 0-day vulnerability.

Recently Honest (this is the nickname of our loyal user) discovered by chance, at London Heathrow Terminal 5, that the information/internet system used by customers of the London airport is not so safe, but easily vulnerable to hacker attacks (in the literal meaning of the word, not the overused and misinterpreted media one). Let's start from the beginning.

A few days ago Honest contacted me to ask whether we would be interested in publishing an article on what he described as a real "scoop" about Heathrow Airport Terminals' IT security. One more hacked airport.

Obviously this raised my curiosity, so Honest started to tell me how he had found, purely by chance, that some of the PCs at Heathrow are highly exposed to the risk of external breaches. On these PCs (most probably installed by external providers under a free concession), he kept saying, anyone could carry out different types of digital attack aimed at taking control of the machine and turning it into an internet bot or a bridge, with a serious information security risk for whoever used that machine and left sensitive data on it.

These computers are, in fact, dedicated to customers who pay by credit card to surf the web. It is easy to imagine, then, that every user would insert their personal credit card details, log in to an email account, type passwords and so on; sensitive data that, given the machine's low level of protection, could easily be grabbed and used for wrongful purposes. Honest assured us he could provide the evidence needed to demonstrate what he said about the hacked kiosks in the airport.

Granted, I do not know Honest personally and, before this occasion, we had never been in touch. About him I know only what he has told me of himself: Honest is Italian and works in the IT security field.

For this reason I did not immediately consider his information reliable, or at least I gave myself the benefit of the doubt, asking our "sneak" for more details. Honest then gave me the link to an image hosted on a foreign server, showing a screenshot of one of the incriminated machines; clearly this was not enough to give full credit to his story, which still needed a proper check.

But in the end, how did Honest discover this information leak in the systems of one of the most important airports in the world? In the simplest way possible: Honest, passing through the London hub, decided to use one of the computers to surf the web and by chance got access to an Internet Explorer window, simply because the software generated an unexpected pop-up after an error, thereby bypassing the dedicated portal that is supposed to prevent the execution of other programs on the PC.

At that point Honest's curiosity soon took the upper hand: he started to verify a set of conditions, and at the end of his "exploration" he was totally surprised by the level of exposure of a machine that could potentially cause serious damage to the information security of London's Heathrow airport.

Those terminals are not owned by the airport itself, but by an external provider (http://www.spectruminteractive.co.uk) that sells the service; this, however, does not change the responsibility of whoever offers the service to the public (passengers and airport personnel).

Honest, indeed, made clear that on the machine he used it was not only possible to "get around" the access control, surfing for free instead of paying, but also to install different types of software, access the file system, and turn the machine into a bot to intercept IP traffic or sniff access credentials, or into a bridge to the outside. It is important to highlight once again that the primary responsibility lies specifically with Spectrum Interactive, which provided the machines and carries out their maintenance.

This article comes from an analysis conducted on some of the machines present in Heathrow airport, so it is by no means certain that the vulnerabilities found apply to all the other machines installed by the same provider, Spectrum Interactive, in the airport and, in general, for all of its other clients.

Below is the technical analysis carried out by Honest, with the evidence confirming the vulnerabilities found.

File listing:

Through Internet Explorer it is possible to access all of the files on the hacked airport computer.

Information Disclosure:

Some of the files used for the system deployment contain information that is useful for conducting more sophisticated attacks against the Heathrow airport systems.

Command execution: 

It was possible to execute commands on the machine through an MS-DOS window.

Indeed, through a Gmail dialog box it was possible to upload a file and to modify a link on the desktop so that it invoked the file command.com.

Through the MS-DOS window it was possible to view various information about the computer: its IP address, the computer's name, the installed software and the Windows patches applied. Moreover, again using the Internet Explorer dialog box, it was possible to install software.

Specifically, this technique made it possible to install:

  • Keyloggers
  • Sniffer
  • Back Door
  • Malware
  • Etc…

Hacked Airport Remote Access 

The public IP address used by each machine is reachable remotely.

This means that external attacks are feasible, and a back door can guarantee access to external users. These simple vulnerabilities put these computers completely under the control of hypothetical ill-intentioned users and represent a big problem for the privacy of all the unaware internet users within the airport.

At the time of publication of this article, the security managers of Heathrow and of the service provider have already been alerted to the case, as Honest confirms. These machines have therefore probably already been decommissioned.

Moral of the hacked airport story:

We hope that after the publication of this article Heathrow's management will realize that protecting their users, also from the information technology point of view, is just as important as ensuring their physical security; we therefore strongly hope that Spectrum Interactive will raise its level of attention when providing its products, by adopting more accurate security checks.

We consider it important to publish this article with the aim of warning all those who, on PCs whose level of security is unknown or unverified, access their e-mail accounts or simply enter sensitive data, such as their credit card number, to browse the Internet or make online transactions.

Indeed, you may enter your data on computers, such as those at Heathrow, which could be used by ICT experts to carry out actions that are absolutely illegitimate or illegal. It is very important that everyone becomes aware of the main issues relating to information security and learns to pay greater attention, just as happens in other risky situations of everyday life.

Our thanks go to Honest for the opportunity he gave us, through RedOracle.com, to highlight this issue and disclose the information for educational purposes. This applies not only to Heathrow Airport Terminals and kiosk security, but also to any public kiosk in any other location.

 

[Screenshots: airport hacked evidence 1-5]

 


 

Related Articles:

TheRegister

Password DB


Docker CheatSheet

Patterns for Continuous Integration with Docker on Travis CI

https://medium.com/mobileforgood/patterns-for-continuous-integration-with-docker-on-travis-ci-71857fff14c5

How To Use Docker

“With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere – colleagues’ OS X and Windows laptops, QA servers running Ubuntu in the cloud, and production data center VMs running Red Hat.

Developers can get going quickly by starting with one of the 13,000+ apps available on Docker Hub. Docker manages and tracks changes and dependencies, making it easier for sysadmins to understand how the apps that developers build work. And with Docker Hub, developers can automate their build pipeline and share artifacts with collaborators through public or private repositories.

Docker helps developers build and ship higher-quality applications, faster.” — What is Docker

Tip:

I use Oh My Zsh with the Docker plugin for autocompletion of docker commands. YMMV

 

Once you have docker installed, its intuitive usage experience makes it very easy to work with. By now, you should have the docker daemon running in the background.

If not, use the following command to run the docker daemon.

To run the docker daemon:
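On the docker 0.7.x releases this article covers, that meant starting the client binary in daemon mode (newer releases ship a separate dockerd binary instead):

    sudo docker -d &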

Usage Syntax:

Using docker (via CLI) consists of passing it a chain of options and commands followed by arguments. Please note that docker needs sudo privileges in order to work.
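In general terms:

    sudo docker [option] [command] [arguments]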

Note: The instructions and explanations below are provided as a guide, to give you an overall idea of using and working with docker. The best way to get familiar with it is to practice on a new VPS. Do not be afraid of breaking anything – in fact, do break things! With docker, you can save your progress and continue from there very easily.

Beginning

Let’s begin by seeing all the commands docker has available.

Ask docker for a list of all available commands:
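Running the help command prints the full list:

    sudo docker help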

All currently (as of 0.7.1) available commands:
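In short, these are the same commands described one by one in the Commands List section of the Memcached article further down:

    attach, build, commit, cp, diff, events, export, history, images,
    import, info, insert, inspect, kill, load, login, logs, port, ps,
    pull, push, restart, rm, rmi, run, save, search, start, stop, tag,
    top, version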

Check out system-wide information and docker version:
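Namely:

    sudo docker info
    sudo docker version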

Working with Images

As we have discussed at length, the key to start working with any docker container is using images. There are many freely available images shared across docker image index and the CLI allows simple access to query the image repository and to download new ones.

When you are ready, you can also share your image there as well. See the section on “push” further down for details.

Searching for a docker image:
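For example, searching the index for Ubuntu images:

    sudo docker search ubuntu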

This will provide you a very long list of all available images matching the query: Ubuntu.

Downloading (PULLing) an image:

Either when you are building / creating a container or before you do, you will need to have an image present at the host machine where the containers will exist. In order to download images (perhaps following “search”) you can execute pull to get one.
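For example, pulling the base Ubuntu image:

    sudo docker pull ubuntu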

Listing images:

All the images on your system, including the ones you have created by committing (see below for details), can be listed using “images”. This provides a full list of all available ones.
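That is:

    sudo docker images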

Committing changes to an image:

As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”. Committing makes sure that everything continues from where it left off the next time you use the resulting image.
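In general terms:

    sudo docker commit [container ID] [image name]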

Sharing (PUSHing) images:

Although it is a bit early at this moment – in our article, when you have created your own container which you would like to share with the rest of the world, you can use push to have your image listed in the index where everybody can download and use.
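In general terms (the username/image name pair is whatever you registered on the index):

    sudo docker push [username]/[image name]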

Please remember to “commit” all your changes.

Note: You need to sign-up at index.docker.io to push images to docker index.

Working with Containers

When you “run” any process using an image, in return, you will have a container. When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via the rm command.

Listing all current containers:

By default, you can use the following to list all running containers:
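That is:

    sudo docker ps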

To have a list of both running and non-running ones, use:
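The -a flag includes stopped containers as well:

    sudo docker ps -a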

Creating a New Container

It is currently not possible to create a container without running anything (i.e. commands). To create a new container, you need to use a base image and specify a command to run.
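For example, creating a throw-away container from the Ubuntu base image:

    sudo docker run ubuntu echo "hello"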

This will output “hello” and you will be right back where you were. (i.e. your host’s shell)

As you can not change the command you run after having created a container (hence specifying one during “creation”), it is common practice to use process managers and even custom launch scripts to be able to execute different commands.

Running a container:

When you create a container and it stops (either due to its process ending or you stopping it explicitly), you can use “run” to get the container working again with the same command used to create it.
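In practice this is done with the start command, which re-launches an existing container with its original command:

    sudo docker start [container ID]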

Remember how to find the containers? See above section for listing them.

Stopping a container:

To stop a container’s process from running:
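For example:

    sudo docker stop [container ID]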

Saving (committing) a container:

If you would like to save the progress and changes you made with a container, you can use “commit” as explained above to save it as an image.
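For example, committing a container to a hypothetical image name of your choosing:

    sudo docker commit [container ID] my_username/my_saved_image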

This command turns your container into an image.

Remember that with docker, commits are cheap. Do not hesitate to use them to create images to save your progress with a
container or to roll back when you need (e.g. like snapshots in time).

Removing / Deleting a container:

Using the ID of a container, you can delete one with rm.
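For example:

    sudo docker rm [container ID]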

You can learn more about Docker by reading their official documentation

Remember: Things are progressing very fast at docker. The momentum powered by the community is amazing and many large companies try to join in offering support. However, the product is still not labeled as production ready, hence not recommended to be 100% trusted with mission critical deployments – yet. Be sure to check releases as they come out and continue keeping on top of all things docker.

How To Install Docker step by step

How To Install and Use Docker: Getting Started

Introduction

The docker project offers higher-level tools, working together, which are built on top of some Linux kernel features. The goal is to help developers and system administrators port applications – with all of their dependencies – and get them running across systems and machines without trouble.

Docker achieves this by creating safe, LXC (i.e. Linux Containers) based environments for applications, called docker containers. Containers are created from docker images, which can be built either by executing commands manually or automatically through Dockerfiles.

Docker is here to offer you an efficient, speedy way to port applications across systems and machines. It is light and lean, allowing you to quickly contain applications and run them within their own secure environments (via Linux Containers: LXC).

In this DigitalOcean article, we aim to thoroughly introduce you to Docker: one of the most exciting and powerful open-source projects to come to life in recent years. Docker can help you with so much it’s unfair to attempt to summarise its capabilities in one sentence.

Use cases are limitless and the need has always been there.

Glossary

1. Docker

2. The Docker Project and its Main Parts

3. Docker Elements

  1. Docker Containers
  2. Docker Images
  3. Dockerfiles

4. How to Install Docker

5. How To Use Docker

  1. Beginning
  2. Working with Images
  3. Working with Containers

Docker

Whether it be from your development machine to a remote server for production, or packaging everything for use elsewhere, it is always a challenge when it comes to porting your application stack together with its dependencies and getting it to run without hiccups. In fact, the challenge is immense and solutions so far have not really proved successful for the masses.

In a nutshell, docker as a project offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines – virtual or physical – and brings along loads more of great benefits with it.

Docker achieves its robust application (and therefore, process and resource) containment via Linux Containers (e.g. namespaces and other kernel features). Its further capabilities come from the project’s own parts and components, which abstract away the complexity of working with lower-level linux tools/APIs used for system and application management with regards to securely containing processes.

The Docker Project and its Main Parts

Docker project (open-sourced by dotCloud in March ’13) consists of several main parts (applications) and elements (used by these parts) which are all [mostly] built on top of already existing functionality, libraries and frameworks offered by the Linux kernel and third-parties (e.g. LXC, device-mapper, aufs etc.).

Main Docker Parts

  1. docker daemon: used to manage docker (LXC) containers on the host it runs
  2. docker CLI: used to command and communicate with the docker daemon
  3. docker image index: a repository (public or private) for docker images

Main Docker Elements

  1. docker containers: directories containing everything your application needs
  2. docker images: snapshots of containers or base OS (e.g. Ubuntu) images
  3. Dockerfiles: scripts automating the building process of images

Docker Elements

The following elements are used by the applications forming the docker project.

Docker Containers

The entire procedure of porting applications using docker relies solely on the shipment of containers.

Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts). The only dependency is having the hosts tuned to run the containers (i.e. have docker installed). Containment here is obtained via Linux Containers (LXC).

LXC (Linux Containers)

Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and resources they use) contained within their own environment. By making use of certain features (e.g. namespaces, chroots, cgroups and SELinux profiles), LXC contains application processes and helps with their management through limiting resources, not allowing reach beyond their own file-system (access to the parent’s namespace) etc.

Docker with its containers makes use of LXC, however, also brings along much more.

Docker Containers

Docker containers have several main features.

They allow;

  • Application portability
  • Isolating processes
  • Prevention of tampering from the outside
  • Managing resource consumption

and more, requiring much less resources than traditional virtual-machines used for isolated application deployments.

They do not allow;

  • Messing with other processes
  • Causing “dependency hell”
  • Not working on a different system
  • Being vulnerable to attacks that abuse all of the system’s resources

and (also) more.

Being based and depending on LXC, from a technical aspect, these containers are like a directory (but a shaped and formatted one). This allows portability and gradual builds of containers.

Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one. And various tools and configurations make this set-up work in a harmonious way altogether (e.g. union file-system).

What this way of having containers allows is the extreme benefit of easily launching and creating new containers and images, which are thus kept lightweight (thanks to gradual and layered way they are built). Since everything is based on the file-system, taking snapshots and performing roll-backs in time are cheap(i.e. very easily done / not heavy on resources), much like version control systems (VCS).

Each docker container starts from a docker image which forms the base for other applications and layers to come.

Docker Images

Docker images constitute the base of docker containers from which everything starts to form. They are very similar to default operating-system disk images which are used to run applications on servers or desktop computers.

Having these images (e.g. an Ubuntu base) allows seamless portability across systems. They make a solid, consistent and dependable base with everything that is needed to run the applications. When everything is self-contained and the risk of system-level updates or modifications is eliminated, the container becomes immune to external exposures which could put it out of order – preventing the dependency hell.

As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes. When a new container gets created from a saved (i.e. committed) image, things continue from where they left off. And the union file system brings all the layers together as a single entity when you work with a container.

These base images can be explicitly stated when working with the docker CLI to directly create a new container or they might be specified inside a Dockerfile for automated image building.

Dockerfiles

Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image. Each command executed translates to a new layer of the onion, forming the end product. They basically replace the process of doing everything manually and repeatedly. When a Dockerfile is finished executing, you end up having formed an image, which then you use to start (i.e. create) a new container.

How To Install Docker

At first, docker was only available on Ubuntu. Nowadays, it is possible to deploy docker on RHEL based systems (e.g. CentOS) and others as well.

Let’s quickly go over the installation process for Ubuntu.

Note: Docker can be installed automatically on your Droplet by adding this script to its User Data when launching it. Check out this tutorial to learn more about Droplet User Data.

Installation Instructions for Ubuntu

The simplest way to get docker, other than using the pre-built application image, is to go with a 64-bit Ubuntu 14.04 VPS

Update your droplet:
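For example, with apt-get:

    sudo apt-get update && sudo apt-get -y upgrade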

Make sure aufs support is available:
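On the Ubuntu releases this article targets, the extra kernel package commonly provided aufs support (the package name is an assumption; adjust it to your kernel):

    sudo apt-get install -y linux-image-extra-$(uname -r)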

Add docker repository key to apt-key for package verification:
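Something along these lines; note that the get.docker.io key URL reflects docker's packaging at the time of writing and has since moved:

    sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"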

Add the docker repository to Apt sources:
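Again using the repository layout of that period (an assumption; current packages live elsewhere):

    sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"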

Update the repository with the new addition:
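That is:

    sudo apt-get update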

Finally, download and install docker:
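The package was named lxc-docker in this era (an assumption about the then-current packaging):

    sudo apt-get install -y lxc-docker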

Ubuntu’s default firewall (UFW: Uncomplicated Firewall) denies all forwarding traffic by default, which is needed by docker.

Enable forwarding with UFW:

Edit UFW configuration using the nano text editor.
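UFW's default policies live in /etc/default/ufw:

    sudo nano /etc/default/ufw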

Scroll down and find the line beginning with DEFAULT_FORWARD_POLICY.

Replace:
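The stock line looks like this:

    DEFAULT_FORWARD_POLICY="DROP"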

With:
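The permissive variant docker needs:

    DEFAULT_FORWARD_POLICY="ACCEPT"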

Press CTRL+X and approve with Y to save and close.

Finally, reload the UFW:
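For example:

    sudo ufw reload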

Read our post on how to use Docker here, and please check out docker’s documentation for installation, or the full set of instructions here.

Docker Memcached step by step

Memcached on Docker

Memcached is a distributed, open-source data storage engine. It was designed to store certain types of data in RAM (instead of slower rate traditional disks) for very fast retrievals by applications, cutting the amount of time it takes to process requests by reducing the number of queries performed against heavier datasets or APIs such as traditional databases (e.g. MySQL).

By introducing a smart, well-planned, and optimized caching mechanism, it becomes possible to handle a seemingly larger amount of requests and perform more procedures by applications. This is the most important use case of Memcached, as it is with any other caching application or component.

Heavily relied upon and used in production for web sites and various other applications, Memcached has become one of the go-to tools for increasing performance without -necessarily – needing to utilize further hardware (e.g. more servers or server resources).

It works by storing keys and their matching values (up to 1 MB in size) onto an associative array (i.e. hash table) which can be scaled and distributed across a large number of virtual servers.

Installing Docker on Ubuntu (Latest)

To start using the Docker project on your VPS, you can either use DigitalOcean’s docker image for Ubuntu 13.04 or install it yourself. In this section, we will quickly go over the basic installation instructions for Docker 0.7.1.

Installation Instructions for Ubuntu

Update your droplet:

Make sure aufs support is available:

Add docker repository key to apt-key for package verification:

Add the docker repository to aptitude sources:

Update the repository with the new addition:

Finally, download and install docker:

Ubuntu’s default firewall (UFW: Uncomplicated Firewall) denies all forwarding traffic by default, which is needed by docker.

Enable forwarding with UFW:

Edit UFW configuration using the nano text editor.

Scroll down and find the line beginning with DEFAULT_FORWARD_POLICY.

Replace:

With:

Press CTRL+X and approve with Y to save and close.

Finally, reload the UFW:

Basic Docker Commands

Before we begin working with docker, let’s quickly go over its available commands to refresh our memory from our first Getting Started article.

Running the docker daemon and CLI Usage

Upon installation, the docker daemon should be running in the background, ready to accept commands sent by the docker CLI. For certain situations where it might be necessary to manually run docker, use the following.

Running the docker daemon:

docker CLI Usage:

Note: docker needs sudo privileges in order to work.

Commands List

Here is a summary of currently available (version 0.7.1) docker commands:

  • attach: Attach to a running container
  • build: Build a container from a Dockerfile
  • commit: Create a new image from a container’s changes
  • cp: Copy files/folders from the containers filesystem to the host path
  • diff: Inspect changes on a container’s filesystem
  • events: Get real time events from the server
  • export: Stream the contents of a container as a tar archive
  • history: Show the history of an image
  • images: List images
  • import: Create a new filesystem image from the contents of a tarball
  • info: Display system-wide information
  • insert: Insert a file in an image
  • inspect: Return low-level information on a container
  • kill: Kill a running container
  • load: Load an image from a tar archive
  • login: Register or Login to the docker registry server
  • logs: Fetch the logs of a container
  • port: Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
  • ps: List containers
  • pull: Pull an image or a repository from the docker registry server
  • push: Push an image or a repository to the docker registry server
  • restart: Restart a running container
  • rm: Remove one or more containers
  • rmi: Remove one or more images
  • run: Run a command in a new container
  • save: Save an image to a tar archive
  • search: Search for an image in the docker index
  • start: Start a stopped container
  • stop: Stop a running container
  • tag: Tag an image into a repository
  • top: Lookup the running processes of a container
  • version: Show the docker version information

Getting Started with Creating Memcached Images

Building on our knowledge gained from the previous articles in the docker series, let’s dive straight into building a Dockerfile to have docker automatically build Memcached installed images (which will be used to run sandboxed Memcached instances).

Quick Recap: What Are Dockerfiles?

Dockerfiles are scripts containing commands declared successively which are to be executed, in the order given, by docker to automatically create a new docker image. They help greatly with deployments.

These files always begin with the definition of a base image by using the FROM command. From there on, the build process starts and each following action taken forms the final image, with commits (saving the image state) on the host.

Usage:
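In general terms (run from the directory containing the Dockerfile):

    sudo docker build -t [image name] .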

Note: To learn more about Dockerfiles, check out our article: Docker Explained: Using Dockerfiles to Automate Building of Images.

Dockerfile Commands Overview

  • ADD: Copy a file from the host into the container
  • CMD: Set default commands to be executed, or passed to the ENTRYPOINT
  • ENTRYPOINT: Set the default entrypoint application inside the container
  • ENV: Set environment variable (e.g. “key = value”)
  • EXPOSE: Expose a port to outside
  • FROM: Set the base image to use
  • MAINTAINER: Set the author / owner data of the Dockerfile
  • RUN: Run a command and commit the ending result (container) image
  • USER: Set the user to run the containers from the image
  • VOLUME: Mount a directory from the host to the container
  • WORKDIR: Set the directory for the directives of CMD to be executed

Creating a Dockerfile

Since Dockerfiles are plain-text documents, creating one translates to launching your favourite text editor and writing the commands you want docker to execute in order to build an image. After you start working on the file, continue with adding all the content below (one after the other) before saving the final result.

Note: You can find what the final Dockerfile will look like at the end of this section.

Let’s create an empty Dockerfile using nano text editor:
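That is:

    nano Dockerfile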

We need to have all instructions (commands) and directives listed successively. However, everything starts with building on a base image (set with the FROM command).

Let’s define the purpose of our Dockerfile and declare the base image to use:
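A minimal sketch (the comment header and the choice of a stock Ubuntu base are assumptions):

    # Dockerfile to build a Memcached-installed image
    # Based on a stock Ubuntu base image
    FROM ubuntu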

After this initial block of commands and declarations, we can begin with listing the instructions for Memcached installation.
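For example, refreshing the package index and installing memcached from the Ubuntu repositories:

    # Install memcached from the distribution packages
    RUN apt-get update && apt-get install -y memcached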

Set the default port to be exposed to outside the container:
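Memcached listens on port 11211 by default:

    # Expose the default memcached port
    EXPOSE 11211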

Set the default execution command and entrypoint (i.e. the Memcached daemon):
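One simple approach is to start memcached in the foreground and drop privileges to the daemon user via its own -u flag:

    # Run memcached in the foreground as the unprivileged daemon user
    ENTRYPOINT ["memcached", "-u", "daemon"]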

Final Dockerfile
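Putting the pieces above together, the assembled file looks roughly like this (a sketch; not necessarily identical to the listing that originally accompanied the article):

    # Dockerfile to build a Memcached-installed image
    FROM ubuntu
    RUN apt-get update && apt-get install -y memcached
    EXPOSE 11211
    ENTRYPOINT ["memcached", "-u", "daemon"]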

After having everything written inside the Dockerfile, save it and exit by pressing CTRL+X followed by Y.

Using this Dockerfile, we are ready to get started with dockerised Memcached containers!

Creating the Docker Image for Memcached Containers

We can now create our first Memcached image by following the usage instructions explained in the Dockerfile Basics section.

Run the following command to create an image, tagged as “memcached_img”:
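Assuming the Dockerfile from the previous section sits in the current directory:

    sudo docker build -t memcached_img .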

Note: Do not forget the trailing . for docker to find the Dockerfile.

Running dockerised Memcached Containers

It is very simple to create any number of perfectly isolated and self-contained memcached instances – now– thanks to the image we have obtained in the previous section. All we have to do is to create a new container with docker run.

Creating a Memcached Installed Container

To create a new container, use the following command, modifying it to suit your requirements following this example:
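A sketch using the names referenced below; on the docker 0.7.x releases this article covers the flag was spelled -name, while current releases use --name:

    # Map host port 45001 to memcached's port 11211 inside the container
    sudo docker run -name memcached_ins -d -p 45001:11211 memcached_img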

Now we will have a docker container named “memcached_ins”, accessible from port 45001, run using our image tagged “memcached_img”, which we built previously.

Limiting the Memory for a Memcached Container

In order to limit the amount of memory a docker container process can use, simply set the -m [memory amount] flag with the limit.

To run a container with memory limited to 256 MBs:
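For example (the container name and host port here are arbitrary placeholders):

    sudo docker run -name memcached_limited -m 256m -d -p 45002:11211 memcached_img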

To confirm the memory limit, you can inspect the container:
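For example, filtering the inspection output for memory-related fields:

    sudo docker inspect [container ID] | grep -i mem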

Note: The command above will grab the memory related information from the inspection output. To see all the relevant information regarding your container, opt for sudo docker inspect [container ID].

Testing the Memcached Container

There are various ways to try your newly created Memcached running container(s). We will use a simple Python CLI application for this. However, you can just get to production with your application using caching add-ons, frameworks, or libraries.

Make sure that your host has the necessary libraries for Python / Memcached:
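One way to do this is via pip (the package names are assumptions; adjust them to your distribution):

    sudo apt-get install -y python-pip
    sudo pip install python-memcached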

Let’s create a simple Python script called “mc.py” using nano:
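Open the file in nano:

    nano mc.py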

Copy-and-paste the below (self-explanatory) content inside:
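A minimal example using the python-memcached client (a sketch; not necessarily the script the article originally shipped):

    import sys
    import memcache  # provided by the python-memcached package

    # Usage: python mc.py <host:port> set|get <key> [value]
    mc = memcache.Client([sys.argv[1]])
    command = sys.argv[2]
    key = sys.argv[3]

    if command == "set":
        mc.set(key, sys.argv[4])
        print("Stored %s" % key)
    elif command == "get":
        print(mc.get(key))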

Press CTRL+X and approve with Y to save and close.

Testing a docker memcached instance using the script above from your host:
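Using the sketch above against the container published on port 45001:

    python mc.py 127.0.0.1:45001 set test_key test_value
    python mc.py 127.0.0.1:45001 get test_key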

For the full set of instructions to install and use docker, check out the docker documentation at docker.io.

How To install and use docker – Containers getting started

Introduction

The provided use cases are limitless and the need has always been there. Docker is here to offer you an efficient, speedy way to port applications across systems and machines. It is light and lean, allowing you to quickly contain applications and run them within their own secure environments (via Linux Containers: LXC).

In this DigitalOcean article, we aim to thoroughly introduce you to Docker: one of the most exciting and powerful open-source projects to come to life in recent years. Docker can help you with so much it’s unfair to attempt to summarize its capabilities in one sentence.

Glossary

1. Docker

2. The Docker Project and its Main Parts

3. Docker Elements

  1. Docker Containers
  2. Docker Images
  3. Dockerfiles

4. How to Install Docker

5. How To Use Docker

  1. Beginning
  2. Working with Images
  3. Working with Containers

Docker

Whether it be from your development machine to a remote server for production, or packaging everything for use elsewhere, it is always a challenge when it comes to porting your application stack together with its dependencies and getting it to run without hiccups. In fact, the challenge is immense and solutions so far have not really proved successful for the masses.

In a nutshell, docker as a project offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines – virtual or physical – and brings along loads more of great benefits with it.

Docker achieves its robust application (and therefore, process and resource) containment via Linux Containers (e.g. namespaces and other kernel features). Its further capabilities come from the project’s own parts and components, which abstract away the complexity of working with lower-level linux tools/APIs used for system and application management with regards to securely containing processes.

The Docker Project and its Main Parts

Docker project (open-sourced by dotCloud in March ’13) consists of several main parts (applications) and elements (used by these parts) which are all [mostly] built on top of already existing functionality, libraries and frameworks offered by the Linux kernel and third-parties (e.g. LXC, device-mapper, aufs etc.).

Main Docker Parts

  1. docker daemon: used to manage docker (LXC) containers on the host it runs
  2. docker CLI: used to command and communicate with the docker daemon
  3. docker image index: a repository (public or private) for docker images

Main Docker Elements

  1. docker containers: directories containing everything your application needs
  2. docker images: snapshots of containers or base OS (e.g. Ubuntu) images
  3. Dockerfiles: scripts automating the building process of images

Docker Elements

The following elements are used by the applications forming the docker project.

Docker Containers

The entire procedure of porting applications using docker relies solely on the shipment of containers.

Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts). The only dependency is having the hosts tuned to run the containers (i.e. have docker installed). Containment here is obtained via Linux Containers (LXC).

LXC (Linux Containers)

Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and resources they use) contained within their own environment. By making use of certain features (e.g. namespaces, chroots, cgroups and SELinux profiles), LXC contains application processes and helps with their management through limiting resources, not allowing reach beyond their own file-system (access to the parent’s namespace) etc.

Docker with its containers makes use of LXC, however, also brings along much more.

Docker Containers

Docker containers have several main features.

They allow;

  • Application portability
  • Isolating processes
  • Prevention of tampering from the outside
  • Managing resource consumption

and more, requiring much less resources than traditional virtual-machines used for isolated application deployments.

They do not allow;

  • Messing with other processes
  • Causing “dependency hell”
  • Not working on a different system
  • Being vulnerable to attacks that abuse all of the system’s resources

and (also) more.

Being based and depending on LXC, from a technical aspect, these containers are like a directory (but a shaped and formatted one). This allows portability and gradual builds of containers.

Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one. And various tools and configurations make this set-up work in a harmonious way altogether (e.g. union file-system).

What this way of having containers allows is the extreme benefit of easily launching and creating new containers and images, which are thus kept lightweight (thanks to gradual and layered way they are built). Since everything is based on the file-system, taking snapshots and performing roll-backs in time are cheap(i.e. very easily done / not heavy on resources), much like version control systems (VCS).

Each docker container starts from a docker image which forms the base for other applications and layers to come.

Docker Images

Docker images constitute the base of docker containers from which everything starts to form. They are very similar to default operating-system disk images which are used to run applications on servers or desktop computers.

Having these images (e.g. an Ubuntu base) allows seamless portability across systems. They make a solid, consistent and dependable base with everything that is needed to run the applications. When everything is self-contained and the risk of system-level updates or modifications is eliminated, the container becomes immune to external exposures which could put it out of order – preventing the dependency hell.

As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes. When a new container gets created from a saved (i.e. committed) image, things continue from where they left off. And the union file system brings all the layers together as a single entity when you work with a container.

These base images can be explicitly stated when working with the docker CLI to directly create a new container or they might be specified inside a Dockerfile for automated image building.

Dockerfiles

Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image. Each command executed translates to a new layer of the onion, forming the end product. They basically replace the process of doing everything manually and repeatedly. When a Dockerfile is finished executing, you end up having formed an image, which then you use to start (i.e. create) a new container.

How To Install Docker

At first, docker was only available on Ubuntu. Nowadays, it is possible to deploy docker on RHEL based systems (e.g. CentOS) and others as well.

Let’s quickly go over the installation process for Ubuntu.

Note: Docker can be installed automatically on your Droplet by adding this script to its User Data when launching it. Check out this tutorial to learn more about Droplet User Data.

Installation Instructions for Ubuntu

The simplest way to get docker, other than using the pre-built application image, is to go with a 64-bit Ubuntu 14.04 VPS

Update your droplet:

Make sure aufs support is available:

Add docker repository key to apt-key for package verification:

Add the docker repository to Apt sources:

Update the repository with the new addition:

Finally, download and install docker:

Ubuntu’s default firewall (UFW: Uncomplicated Firewall) denies all forwarding traffic by default, which is needed by docker.

Enable forwarding with UFW:

Edit UFW configuration using the nano text editor.

Scroll down and find the line beginning with DEFAULT_FORWARD_POLICY.

Replace:

With:

Press CTRL+X and approve with Y to save and close.

Finally, reload the UFW:

For a full set of instructions, check out docker documentation for installation here.

How To Use Docker

Once you have docker installed, its intuitive usage experience makes it very easy to work with. By now, you should have the docker daemon running in the background. If not, use the following command to run the docker daemon.

To run the docker daemon:

Usage Syntax:

Using docker (via CLI) consists of passing it a chain of options and commands followed by arguments. Please note that docker needs sudo privileges in order to work.

Note: The instructions and explanations below are provided as a guide, to give you an overall idea of using and working with docker. The best way to get familiar with it is to practice on a new VPS. Do not be afraid of breaking anything – in fact, do break things! With docker, you can save your progress and continue from there very easily.

Beginning

Let’s begin by seeing all the commands docker has available.

Ask docker for a list of all available commands:

All currently (as of 0.7.1) available commands:

Check out system-wide information and docker version:

Working with Images

As we have discussed at length, the key to start working with any docker container is using images. There are many freely available images shared across docker image index and the CLI allows simple access to query the image repository and to download new ones.

When you are ready, you can also share your image there as well. See the section on “push” further down for details.

Searching for a docker image:

This will provide you a very long list of all available images matching the query: Ubuntu.

Downloading (PULLing) an image:

Either when you are building / creating a container or before you do, you will need to have an image present at the host machine where the containers will exist. In order to download images (perhaps following “search”) you can execute pull to get one.

Listing images:

All the images on your system, including the ones you have created by committing (see below for details), can be listed using “images”. This provides a full list of all available ones.

Committing changes to an image:

As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”. Committing makes sure that everything continues from where it left off the next time you use the resulting image.

Sharing (PUSHing) images:

Although it is a bit early at this moment – in our article, when you have created your own container which you would like to share with the rest of the world, you can use push to have your image listed in the index where everybody can download and use.

Please remember to “commit” all your changes.

Note: You need to sign-up at index.docker.io to push images to docker index.

Working with Containers

When you “run” any process using an image, in return, you will have a container. When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via the rm command.

Listing all current containers:

By default, you can use the following to list all running containers:

To have a list of both running and non-running ones, use:

Creating a New Container

It is currently not possible to create a container without running anything (i.e. commands). To create a new container, you need to use a base image and specify a command to run.

This will output “hello” and you will be right back where you were. (i.e. your host’s shell)

As you can not change the command you run after having created a container (hence specifying one during “creation”), it is common practice to use process managers and even custom launch scripts to be able to execute different commands.

Running a container:

When you create a container and it stops (either due to its process ending or you stopping it explicitly), you can use “run” to get the container working again with the same command used to create it.

Remember how to find the containers? See above section for listing them.

Stopping a container:

To stop a container’s process from running:

Saving (committing) a container:

If you would like to save the progress and changes you made with a container, you can use “commit” as explained above to save it as an image.

This command turns your container into an image.

Remember that with docker, commits are cheap. Do not hesitate to use them to create images to save your progress with a
container or to roll back when you need (e.g. like snapshots in time).

Removing / Deleting a container:

Using the ID of a container, you can delete one with rm.

You can learn more about Docker by reading their official documentation

Remember: Things are progressing very fast at docker. The momentum powered by the community is amazing and many large companies try to join in offering support. However, the product is still not labeled as production ready, hence not recommended to be 100% trusted with mission critical deployments – yet. Be sure to check releases as they come out and continue keeping on top of all things docker.


SSL interception advisory – Alert (TA17-075A)

The Security Impact of HTTPS Interception

TLS and its predecessor, Secure Sockets Layer (SSL), are important Internet protocols that encrypt communications between client and server, using digital certificates to establish an identity chain showing that the connection is with a legitimate server verified by a trusted third-party certificate authority.

In order to work, therefore, an interception device must issue its own trusted certificate to client devices – or users would perpetually see warnings that their connection wasn’t secure.

HTTPS inspection works by intercepting the HTTPS network traffic and performing a man-in-the-middle (MiTM) attack on the connection. Browsers and other applications use the interception product's certificate to validate encrypted connections, but this introduces two problems: first, the client can no longer verify the public server's certificate itself; second, and more significantly, the way the inspection product communicates with the web server becomes invisible to the user.

In other words, the user can only be sure that their connection to the interception product is legitimate, but has no idea whether the rest of the communication – to the web server, over the internet – is secure or has been compromised.

And, it turns out, many of these middleboxes and interception software suites do a poor job of security themselves. Many do not properly verify the certificate chain of the server before re-encrypting and forwarding client data. Some do a poor job of forwarding certificate-chain verification errors, keeping users in the dark about a possible attack.

In other words: the very effort to check that a security system is working undermines the security it is supposed to be checking.

Think of it as someone leaving your front door wide open while they check that the key fits.

The academic article describing this issue:

Original Link here.

To verify whether your inspection product is performing the proper verification:

BADSSL
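As a quick, hypothetical spot-check from a client sitting behind the inspection product, you can request a few deliberately broken badssl.com hosts; every one of these should fail, and any that succeeds suggests the middlebox is not validating upstream certificates:

    curl -I https://expired.badssl.com/
    curl -I https://self-signed.badssl.com/
    curl -I https://wrong.host.badssl.com/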

Please also have a look at the US-CERT Advisory.


Xperia Tablet Z2 – 23.4.A.1.232 / R5C – Root and Dual Recovery

23.4.A.1.232 / R5C CE3

now available for Sony Xperia Z2 Tablet (SGP521)

 

XperiFirm SGP521

Device: Xperia Z2 Tablet (SGP521)
CDA: 1282-0228
Market: CE3
Operator: Generic
Network: GLOBAL-LTE
Release: 23.4.A.1.232 / R5C

– SuperSU 2.52
– Dual Recovery

Installation
1) Use flashtool (Flashmode) to flash the ftf file on your SGP521, then unlock boot loader with flashtool and then enter android and enable USB debug
2) Use Flashtool (Fastboot mode) to flash the boot.img
3) Upload the SGP521_23.4.A.1.232_XZDRKernel2.8.21-RELEASE.flashable.zip and BETA-SuperSU-v2.52.zip on your SGP521 sdcard (external)
4) Reboot device and enter in recovery mode.
5) install XZDRKernel2.8.21
6) Reboot device and enter in recovery mode.
7) install SuperSU-v2.52
8) reboot and done!

Download links

FTF Magnet
Torrent

SGP521_23.4.A.1.232_CE3_R5C.ftf
Boot.img
SuperSU-v2.52
SGP521_23.4.A.1.232_XZDRKernel2.8.21

RedOracle XDA


INFOSEK 2008

Leading information security event with KEVIN MITNICK – June, 9, 10 and 11, 2008 in Nova Gorica, Slovenia





June 9 – Preconference Day with KEVIN MITNICK, world’s most famous (former) hacker, today’s social engineering expert. Before he was caught by the FBI he gained unauthorized access to computer systems at some of the largest corporations on the planet.


Listen to his Interview
His lectures are sold out all over the world in advance! 

June 10 and 11 – INFOSEK 2008-FORUM Conference in cooperation with The European Network and Information Security Agency (ENISA); besides Slovenian speakers, the conference brings information security experts from all over Europe (England, Italy, Spain, Norway, etc.) in whole-day sessions in English.

Please visit http://www.infosek.net/index.php?lang=2 and get to know more about the program. 
A unique opportunity to hear such a famous hacker so close to you and to meet him face to face!