Managing PostgreSQL Credentials in Docker

A Practical Guide to Hardcoding, Environment Variables, and Terminal Injection with Flask

This article details the implementation of a Python and PostgreSQL application within Docker, focusing on three primary methods for managing credentials: hardcoding, environment files, and terminal injection. It concludes with a complete application featuring data persistence and record deletion.

Section 1: Project Foundation

The following static files define the application environment and dependencies. These files remain constant regardless of the credential management strategy used.

File Structure:

project/
├── Dockerfile
├── requirements.txt
├── docker-compose.yml
├── .env
└── app.py

1. Dockerfile: Defines the Python environment and installs dependencies.

FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]

2. requirements.txt: Specifies the Python libraries required for the web application and database connection.

Flask==2.3.3
psycopg2-binary==2.9.7

Section 2: Method A — Hardcoding Credentials

In this configuration, credentials are explicitly defined within the docker-compose.yml file. This approach is the simplest to implement but poses significant security risks as secrets are committed to version control.

Logic:

  1. The db service defines POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB. Postgres uses these to initialize the database.
  2. The web service defines matching variables so the Python application can connect.
  3. The application reads these directly from os.environ.
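Reading required settings directly from os.environ raises a bare KeyError when a variable is missing. For clearer startup failures, the lookup can be wrapped in a small helper — a sketch, and the helper name require_env is ours, not part of the article's code:

```python
import os

def require_env(name):
    """Fetch a required environment variable, failing with a clear message."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example (assumes docker-compose would normally set this):
os.environ.setdefault("DB_HOST", "db")
print(require_env("DB_HOST"))
```

Inside get_db_connection, each os.environ['...'] lookup could be replaced by require_env('...') without changing behavior when all variables are present.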

docker-compose.yml

services:
  db:
    image: postgres:15-alpine
    environment:
      # Hardcoded credentials
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydatabase
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      # Hardcoded credentials passed to app
      DB_HOST: db
      DB_USER: myuser
      DB_PASSWORD: mypassword
      DB_NAME: mydatabase
    depends_on:
      - db

volumes:
  postgres_data:

app.py (Basic Version)

import os
import psycopg2
from flask import Flask, request, render_template_string, redirect, url_for

app = Flask(__name__)

def get_db_connection():
    conn = psycopg2.connect(
        host=os.environ['DB_HOST'],
        database=os.environ['DB_NAME'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD']
    )
    return conn

HTML_TEMPLATE = '''
<!doctype html>
<title>Simple Postgres App</title>
<h1>Enter Data</h1>
<form method="post">
  <input type="text" name="content" placeholder="Enter info" required>
  <button type="submit">Submit</button>
</form>
<h2>Saved Data</h2>
<ul>
{% for item in items %}
  <li>{{ item[0] }}</li>
{% endfor %}
</ul>
'''

@app.route('/', methods=['GET', 'POST'])
def index():
    conn = get_db_connection()
    cur = conn.cursor()
    
    cur.execute('CREATE TABLE IF NOT EXISTS info (content TEXT)')
    conn.commit()  # commit the DDL so the table survives a GET-only request
    
    if request.method == 'POST':
        content = request.form['content']
        cur.execute('INSERT INTO info (content) VALUES (%s)', (content,))
        conn.commit()
        cur.close()
        conn.close()
        return redirect(url_for('index'))
    
    cur.execute('SELECT * FROM info')
    items = cur.fetchall()
    
    cur.close()
    conn.close()
    
    return render_template_string(HTML_TEMPLATE, items=items)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
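One operational caveat: depends_on only orders container startup; it does not wait for Postgres to accept connections, so the web container's first requests can fail while the database initializes. A common workaround is a small retry loop — sketched here with an injectable connect callable (the helper name is ours; in app.py the callable would wrap psycopg2.connect):

```python
import time

def connect_with_retry(connect, retries=10, delay=1.0):
    """Call connect() until it succeeds, retrying while the database starts up."""
    last_exc = None
    for _ in range(retries):
        try:
            return connect()
        except Exception as exc:  # psycopg2 raises OperationalError here
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

In get_db_connection this would look like connect_with_retry(lambda: psycopg2.connect(...)).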

Section 3: Method B — Environment File (.env)

This method externalizes secrets into a .env file. Docker Compose automatically reads this file for variable substitution. This keeps secrets out of the configuration file, which is standard practice for development workflows.

Logic:

  1. Create a .env file containing key-value pairs.
  2. Reference these keys in docker-compose.yml using ${VARIABLE} syntax.
  3. The application code remains unchanged; it continues to read os.environ.
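For local runs of app.py outside Docker, the same .env file can be loaded with a few lines of Python. The parser below is illustrative, not Compose's implementation (the python-dotenv package is the usual library choice); note it preserves the same precedence rule, letting variables already set in the shell win:

```python
import os

def load_env_file(path=".env"):
    """Parse KEY=VALUE lines into the process environment.
    Variables already set in the shell win, mirroring Compose precedence."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env_file()
```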

.env

POSTGRES_USER=env_user
POSTGRES_PASSWORD=env_password
POSTGRES_DB=env_database

.gitignore

.env

docker-compose.yml

services:
  db:
    image: postgres:15-alpine
    environment:
      # Variable substitution from .env file
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      DB_HOST: db
      # Passing same variables to the web container
      DB_USER: ${POSTGRES_USER}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_NAME: ${POSTGRES_DB}
    depends_on:
      - db

volumes:
  postgres_data:

Execution: Because the database volume was initialized with Method A's credentials, clear it before running:

docker-compose down -v
docker-compose up --build

Section 4: Method C — Terminal Injection

Credentials are passed directly via the command line. Shell environment variables take precedence over the .env file. This is ideal for CI/CD pipelines or ephemeral deployments where secrets are injected at runtime.

Logic:

  1. Define variables in the shell before the docker-compose command.
  2. Docker Compose prioritizes these shell variables over the .env file.
  3. No configuration files are modified on disk.
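The precedence rule can be pictured as a simple dictionary merge — a toy illustration of the behavior, not Compose internals:

```python
# Later sources override earlier ones: .env values first, then the shell.
env_file = {"POSTGRES_USER": "env_user", "POSTGRES_DB": "env_database"}  # from .env
shell = {"POSTGRES_USER": "term_user"}  # exported in the terminal

resolved = {**env_file, **shell}  # shell wins on conflict
print(resolved["POSTGRES_USER"])  # term_user
print(resolved["POSTGRES_DB"])    # env_database
```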

Execution:

Linux/Mac:

docker-compose down -v
POSTGRES_USER=term_user POSTGRES_PASSWORD=term_password POSTGRES_DB=term_database docker-compose up --build

Windows PowerShell:

docker-compose down -v
$env:POSTGRES_USER="term_user"; $env:POSTGRES_PASSWORD="term_password"; $env:POSTGRES_DB="term_database"; docker-compose up --build

The docker-compose.yml file from Method B is used unchanged. Docker Compose resolves ${POSTGRES_USER} to the value provided in the terminal (term_user).


Section 5: Final Application Logic

The final application extends the basic version to support data deletion and prevent duplicate entries on page refresh (Post-Redirect-Get pattern). The database schema is updated to include a unique id column, allowing precise deletion of records.

Key Features:

  1. Primary Key: Adds id SERIAL PRIMARY KEY to the schema.
  2. Post-Redirect-Get: The POST handler redirects to the GET handler, preventing form resubmission on refresh.
  3. Delete Route: A new route handles deletion requests for specific IDs.
  4. HTML Interface: Includes a “Delete” button for each row.

app.py (Final Version)

import os
import psycopg2
from flask import Flask, request, render_template_string, redirect, url_for

app = Flask(__name__)

def get_db_connection():
    conn = psycopg2.connect(
        host=os.environ['DB_HOST'],
        database=os.environ['DB_NAME'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD']
    )
    return conn

HTML_TEMPLATE = '''
<!doctype html>
<title>Simple Postgres App</title>
<h1>Enter Data</h1>
<form method="post">
  <input type="text" name="content" placeholder="Enter info" required>
  <button type="submit">Submit</button>
</form>
<h2>Saved Data</h2>
<ul>
{% for item in items %}
  <li>
    {{ item[1] }} 
    <form action="/delete/{{ item[0] }}" method="post" style="display:inline;">
      <button type="submit">Delete</button>
    </form>
  </li>
{% endfor %}
</ul>
'''

@app.route('/', methods=['GET', 'POST'])
def index():
    conn = get_db_connection()
    cur = conn.cursor()
    
    # Schema includes 'id' for deletion targeting
    cur.execute('CREATE TABLE IF NOT EXISTS info (id SERIAL PRIMARY KEY, content TEXT)')
    conn.commit()  # commit the DDL so the table survives a GET-only request
    
    if request.method == 'POST':
        content = request.form['content']
        cur.execute('INSERT INTO info (content) VALUES (%s)', (content,))
        conn.commit()
        cur.close()
        conn.close()
        return redirect(url_for('index'))
    
    # Fetches id and content
    cur.execute('SELECT id, content FROM info')
    items = cur.fetchall()
    
    cur.close()
    conn.close()
    
    return render_template_string(HTML_TEMPLATE, items=items)

@app.route('/delete/<int:item_id>', methods=['POST'])
def delete(item_id):
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute('DELETE FROM info WHERE id = %s', (item_id,))
    conn.commit()
    cur.close()
    conn.close()
    return redirect(url_for('index'))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Execution: Because this version introduces a new schema structure, clear existing data and rebuild:

docker-compose down -v
docker-compose build --no-cache
docker-compose up

Conclusion

This implementation demonstrates the complete lifecycle of a containerized database application, from infrastructure definition to dynamic data manipulation. We established three distinct methods for credential management:

  1. Hardcoding: Useful for rapid prototyping but unsuitable for production due to security risks.
  2. Environment Files: The standard for keeping configuration separate from code, facilitating cleaner version control.
  3. Terminal Injection: Well suited to ephemeral or automated environments, keeping secrets out of project files (though inline variables can still end up in shell history).

Beyond configuration, the application logic addressed two critical web-development requirements: the Post-Redirect-Get pattern to prevent duplicate data insertion on refresh, and schema design using primary keys to enable precise record deletion.

When transitioning between these methods or altering the database schema, always remember to clear the persistent volume using docker-compose down -v to ensure a clean initialization state. This workflow provides a reproducible foundation for developing more complex containerized systems.
