26f6f6bc73c9
Interpreting unsanitized user input as code allows a malicious user to perform arbitrary code execution.
introduction/mitre.py
213: # @authentication_decorator
214: @csrf_exempt
215: def mitre_lab_25_api(request):
216:     if request.method == "POST":
217:         expression = request.POST.get('expression')
>>> 218:         result = eval(expression)
219:         return JsonResponse({'result': result})
220:     else:
221:         return redirect('/mitre/25/lab/')
222:
223:
An attacker could execute arbitrary Python code on the server, potentially leading to complete system compromise, data theft, file system access, or launching further attacks against internal systems.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for expression → ControlFlowNode for expression
def mitre_lab_25_api(request): → ControlFlowNode for request
expression = request.POST.get('expression') → ControlFlowNode for Attribute
expression = request.POST.get('expression') → ControlFlowNode for Attribute()
expression = request.POST.get('expression') → ControlFlowNode for expression
result = eval(expression) → ControlFlowNode for expression
Replace the unsafe eval() call with a safe expression evaluator. First, validate the input against a whitelist of characters permitted in mathematical expressions (digits, basic operators, parentheses). Then evaluate it with a restricted evaluator: ast.literal_eval() if only literal values are needed (it never executes code), or a dedicated math-expression library such as simpleeval.
import re
from simpleeval import simple_eval

def mitre_lab_25_api(request):
    if request.method == "POST":
        expression = request.POST.get('expression', '')
        # Whitelist allowed characters for mathematical expressions
        if not re.match(r'^[\d\s+\-*/().]+$', expression):
            return JsonResponse({'error': 'Invalid expression'}, status=400)
        try:
            # Use a safe evaluation library
            result = simple_eval(expression)
            return JsonResponse({'result': result})
        except Exception:
            return JsonResponse({'error': 'Evaluation failed'}, status=400)
    else:
        return redirect('/mitre/25/lab/')
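Where the third-party simpleeval dependency is not wanted, a small whitelist evaluator can be built on the standard ast module. This is a sketch supporting only numeric literals, the four basic operators, and unary minus; anything else is rejected:

```python
import ast
import operator

# Permitted operator nodes and their implementations
_ALLOWED_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expression):
    """Evaluate a basic arithmetic expression without ever calling eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_eval(node.operand))
        raise ValueError("disallowed expression")
    return _eval(ast.parse(expression, mode='eval'))
```

Because evaluation walks the parsed AST and refuses any node type outside the map, names, calls, and attribute access can never execute.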
98a74aa8d9e6
Using externally controlled strings in a command line may allow a malicious user to change the meaning of the command.
introduction/mitre.py
228: @authentication_decorator
229: def mitre_lab_17(request):
230:     return render(request, 'mitre/mitre_lab_17.html')
231:
232: def command_out(command):
>>> 233:     process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
234:     return process.communicate()
235:
236:
237: @csrf_exempt
238: def mitre_lab_17_api(request):
An attacker could execute arbitrary system commands on the server by injecting shell metacharacters (like ;, &, |, or $(cmd)) in the IP parameter, potentially leading to complete system compromise, data theft, or server takeover.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for ip → ControlFlowNode for command → ControlFlowNode for command → ControlFlowNode for command → ControlFlowNode for command
def mitre_lab_17_api(request): → ControlFlowNode for request
ip = request.POST.get('ip') → ControlFlowNode for Attribute
ip = request.POST.get('ip') → ControlFlowNode for Attribute()
ip = request.POST.get('ip') → ControlFlowNode for ip
command = "nmap " + ip → ControlFlowNode for command
res, err = command_out(command) → ControlFlowNode for command
def command_out(command): → ControlFlowNode for command
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) → ControlFlowNode for command
First, remove the `shell=True` parameter as it enables shell injection. Second, use a whitelist approach to validate the IP address input before constructing the command. Third, pass the command as a list of arguments instead of a string to prevent command injection. Finally, consider using a safer alternative like Python's `ipaddress` module for validation.
import ipaddress
import subprocess

def validate_ip_address(ip_str):
    try:
        ipaddress.ip_address(ip_str)
        return True
    except ValueError:
        return False

def command_out_safe(ip):
    if not validate_ip_address(ip):
        raise ValueError("Invalid IP address")
    # Pass the command as a list without shell=True
    process = subprocess.Popen(['nmap', ip],
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
    return process.communicate()

# In the mitre_lab_17_api function:
ip = request.POST.get('ip')
if validate_ip_address(ip):
    res, err = command_out_safe(ip)
else:
    return HttpResponse("Invalid IP address", status=400)
a858c9f4f6ef
Interpreting unsanitized user input as code allows a malicious user to perform arbitrary code execution.
introduction/views.py
448:     if (request.method=="POST"):
449:         val=request.POST.get('val')
450:
451:         print(val)
452:         try:
>>> 453:             output = eval(val)
454:         except:
455:             output = "Something went wrong"
456:             return render(request,'Lab/CMD/cmd_lab2.html',{"output":output})
457:         print("Output = ", output)
458:         return render(request,'Lab/CMD/cmd_lab2.html',{"output":output})
An attacker could execute arbitrary Python code on the server, potentially leading to remote code execution, data theft, system compromise, or complete server takeover by injecting malicious payloads through the 'val' parameter.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for val → ControlFlowNode for val
def cmd_lab2(request): → ControlFlowNode for request
val=request.POST.get('val') → ControlFlowNode for Attribute
val=request.POST.get('val') → ControlFlowNode for Attribute()
val=request.POST.get('val') → ControlFlowNode for val
output = eval(val) → ControlFlowNode for val
Replace the unsafe eval() call with a safe alternative. First, validate that the input contains only the characters expected in mathematical expressions. Then use a restricted evaluation method: Python's ast.literal_eval() for literal values, or a whitelist-based math-expression evaluator for arithmetic. Never execute arbitrary user input as code.
import re

if request.method == "POST":
    val = request.POST.get('val', '')
    # Only allow basic math expressions with numbers and operators
    if re.match(r'^[\d\s+\-*/().]*$', val):
        try:
            # The character whitelist above keeps names and attribute access out,
            # so the restricted eval below only ever sees arithmetic. Note that an
            # empty __builtins__ dict by itself is NOT a sandbox; a dedicated
            # evaluator (e.g. simpleeval, or an ast-based one) is safer still.
            output = str(eval(val, {"__builtins__": {}}, {}))
        except Exception:
            output = "Invalid expression"
    else:
        output = "Invalid input - only numbers and basic math operators allowed"
    return render(request, 'Lab/CMD/cmd_lab2.html', {"output": output})
3af51f6784cc
Making a network request to a URL that is fully user-controlled allows for request forgery attacks.
introduction/views.py
951:         return render(request, "Lab/ssrf/ssrf_lab2.html")
952:
953:     elif request.method == "POST":
954:         url = request.POST["url"]
955:         try:
>>> 956:             response = requests.get(url)
957:             return render(request, "Lab/ssrf/ssrf_lab2.html", {"response": response.content.decode()})
958:         except:
959:             return render(request, "Lab/ssrf/ssrf_lab2.html", {"error": "Invalid URL"})
960: #--------------------------------------- Server-side template injection --------------------------------------#
961:
An attacker could exploit this to make requests to internal services, potentially accessing sensitive data from databases, cloud metadata services, or internal APIs. They could also pivot to attack other internal systems or perform port scanning of the internal network.
ControlFlowNode for request → ControlFlowNode for url → ControlFlowNode for url
def ssrf_lab2(request): → ControlFlowNode for request
url = request.POST["url"] → ControlFlowNode for url
response = requests.get(url) → ControlFlowNode for url
First, implement an allowlist of permitted domains or URL patterns that the application can access. Second, validate and sanitize the user-provided URL by parsing it with a secure library like urllib.parse and checking against the allowlist. Third, implement network-level restrictions by using a whitelist of allowed IP ranges or disabling access to internal network segments. Finally, consider using a timeout and limiting response size to prevent resource exhaustion attacks.
import requests
from urllib.parse import urlparse

ALLOWED_DOMAINS = ['example.com', 'api.trusted-service.com']

if request.method == "POST":
    url = request.POST["url"]
    # Parse and validate the URL
    parsed = urlparse(url)
    if not parsed.netloc or parsed.scheme not in ['http', 'https']:
        return render(request, "Lab/ssrf/ssrf_lab2.html", {"error": "Invalid URL"})
    # Domain allowlist check
    if parsed.netloc not in ALLOWED_DOMAINS:
        return render(request, "Lab/ssrf/ssrf_lab2.html", {"error": "Domain not permitted"})
    # Best-effort guard against obvious internal addresses (string prefixes
    # alone are easy to bypass; resolving the host and checking the IP is stronger)
    if parsed.netloc.startswith(('localhost', '127.', '192.168.', '10.', '172.')):
        return render(request, "Lab/ssrf/ssrf_lab2.html", {"error": "Internal addresses not allowed"})
    try:
        response = requests.get(url, timeout=5, allow_redirects=False)
        return render(request, "Lab/ssrf/ssrf_lab2.html", {"response": response.content.decode()})
    except requests.RequestException:
        return render(request, "Lab/ssrf/ssrf_lab2.html", {"error": "Request failed"})
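Prefix matching on the host string is easy to bypass (an attacker can write 127.0.0.1 as a decimal integer, use 172.16.0.0/12 addresses outside the listed prefixes, or register a DNS name that resolves internally). A stronger check resolves the host and inspects the resulting addresses; this is a standard-library sketch, with the function name purely illustrative, and note a resolve-then-fetch gap (DNS rebinding) still remains unless the resolved IP is pinned for the actual request:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_url(url):
    """Return True only if every address the host resolves to is public."""
    parsed = urlparse(url)
    if parsed.scheme not in ('http', 'https') or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 80)
    except socket.gaierror:
        return False  # unresolvable hosts are rejected
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Reject loopback, RFC 1918, link-local, and reserved ranges
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

A view would call this after the allowlist check and refuse the request when it returns False.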
5eace981fc79
Deserializing user-controlled data may allow attackers to execute arbitrary code.
introduction/views.py
208:     if token == None:
209:         token = encoded_user
210:         response.set_cookie(key='token',value=token.decode('utf-8'))
211:     else:
212:         token = base64.b64decode(token)
>>> 213:         admin = pickle.loads(token)
214:         if admin.admin == 1:
215:             response = render(request,'Lab/insec_des/insec_des_lab.html', {"message":"Welcome Admin, SECRETKEY:ADMIN123"})
216:             return response
217:
218:     return response
Attackers could execute arbitrary code on the server by crafting malicious pickle payloads, leading to complete system compromise, data theft, or server takeover.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for token → ControlFlowNode for token → ControlFlowNode for token
def insec_des_lab(request): → ControlFlowNode for request
token = request.COOKIES.get('token') → ControlFlowNode for Attribute
token = request.COOKIES.get('token') → ControlFlowNode for Attribute()
token = request.COOKIES.get('token') → ControlFlowNode for token
token = base64.b64decode(token) → ControlFlowNode for token
admin = pickle.loads(token) → ControlFlowNode for token
Replace pickle deserialization with a secure alternative. First, implement a JSON-based token system using HMAC signatures for integrity verification. Store only necessary user data (like user ID and admin flag) in the token, not Python objects. Validate the token signature before processing any data.
import json
import hmac
import hashlib
from django.conf import settings
# In insec_des_lab function, replace lines 212-213 with:
token_data = json.loads(base64.b64decode(token).decode('utf-8'))
data = token_data['data']
signature = token_data['sig']
expected_sig = hmac.new(settings.SECRET_KEY.encode(), data.encode(), hashlib.sha256).hexdigest()
if not hmac.compare_digest(signature, expected_sig):
    return HttpResponse('Invalid token', status=403)
user_info = json.loads(data)
admin_flag = user_info.get('admin', 0)
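For completeness, the matching token could be minted like this. This is a sketch: make_signed_token is a hypothetical helper, and secret_key stands in for settings.SECRET_KEY in the Django view:

```python
import base64
import hashlib
import hmac
import json

def make_signed_token(user_info, secret_key):
    """Serialize user data as JSON and attach an HMAC-SHA256 signature.

    Counterpart to the verification snippet: 'data' is the JSON payload
    and 'sig' its hex HMAC, computed with the server-side secret.
    """
    data = json.dumps(user_info)
    sig = hmac.new(secret_key.encode(), data.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(json.dumps({'data': data, 'sig': sig}).encode('utf-8'))
```

Because only plain JSON values are stored, a tampered admin flag fails the signature check instead of being deserialized into executable state.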
f3b25cff5d3d
Deserializing user-controlled data may allow attackers to execute arbitrary code.
introduction/views.py
548:     else:
549:
550:         try :
551:             file=request.FILES["file"]
552:             try :
>>> 553:                 data = yaml.load(file,yaml.Loader)
554:
555:                 return render(request,"Lab/A9/a9_lab.html",{"data":data})
556:             except:
557:                 return render(request, "Lab/A9/a9_lab.html", {"data": "Error"})
558:
An attacker could upload a malicious YAML file containing Python constructors that execute arbitrary code on the server, potentially leading to remote code execution, data theft, or complete system compromise.
ControlFlowNode for request → ControlFlowNode for file → ControlFlowNode for file
def a9_lab(request): → ControlFlowNode for request
file=request.FILES["file"] → ControlFlowNode for file
data = yaml.load(file,yaml.Loader) → ControlFlowNode for file
Replace the unsafe yaml.load() with yaml.safe_load() to prevent arbitrary code execution. The yaml.Loader is unsafe because it allows the execution of Python constructors. Use yaml.safe_load() which only loads basic YAML types like strings, lists, and dictionaries without executing arbitrary code. Additionally, validate the uploaded file size and type before processing.
try:
    file = request.FILES["file"]
    # Optional: add file size validation (MAX_UPLOAD_SIZE is an app-defined limit)
    if file.size > MAX_UPLOAD_SIZE:
        return render(request, "Lab/A9/a9_lab.html", {"data": "File too large"})
    # Use safe_load instead of load with the unsafe Loader
    data = yaml.safe_load(file)
    return render(request, "Lab/A9/a9_lab.html", {"data": data})
except Exception:
    return render(request, "Lab/A9/a9_lab.html", {"data": "Error"})
5d362b71e0cc
Parsing user input as an XML document with external entity expansion is vulnerable to XXE attacks.
introduction/views.py
249: @csrf_exempt
250: def xxe_parse(request):
251:
252:     parser = make_parser()
253:     parser.setFeature(feature_external_ges, True)
>>> 254:     doc = parseString(request.body.decode('utf-8'), parser=parser)
255:     for event, node in doc:
256:         if event == START_ELEMENT and node.tagName == 'text':
257:             doc.expandNode(node)
258:             text = node.toxml()
259:             startInd = text.find('>')
An attacker could read arbitrary files from the server's filesystem, initiate SSRF attacks to access internal network resources, or cause denial of service through entity expansion attacks (Billion Laughs attack).
ControlFlowNode for request → ControlFlowNode for Attribute()
def xxe_parse(request): → ControlFlowNode for request
doc = parseString(request.body.decode('utf-8'), parser=parser) → ControlFlowNode for Attribute()
Disable external entity processing in the XML parser. First, set feature_external_ges to False instead of True. Second, also disable external parameter entities via feature_external_pes. Finally, consider using a secure XML parser such as defusedxml, which disables DTD and entity processing by default.
from xml.sax.handler import feature_external_ges, feature_external_pes

@csrf_exempt
def xxe_parse(request):

    parser = make_parser()
    parser.setFeature(feature_external_ges, False)  # block external general entities
    parser.setFeature(feature_external_pes, False)  # block external parameter entities
    doc = parseString(request.body.decode('utf-8'), parser=parser)
    for event, node in doc:
        if event == START_ELEMENT and node.tagName == 'text':
            doc.expandNode(node)
            text = node.toxml()
            startInd = text.find('>')
b23ff41c6cea
Using externally controlled strings in a command line may allow a malicious user to change the meaning of the command.
introduction/views.py
419:     command = "dig {}".format(domain)
420:
421:     try:
422:         # output=subprocess.check_output(command,shell=True,encoding="UTF-8")
423:         process = subprocess.Popen(
>>> 424:             command,
425:             shell=True,
426:             stdout=subprocess.PIPE,
427:             stderr=subprocess.PIPE)
428:         stdout, stderr = process.communicate()
429:         data = stdout.decode('utf-8')
An attacker could execute arbitrary shell commands on the server by injecting shell metacharacters (like ;, &&, |, or $(...)) in the domain parameter, potentially leading to remote code execution, data theft, or server compromise.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for domain → ControlFlowNode for domain → ControlFlowNode for command → ControlFlowNode for command
def cmd_lab(request): → ControlFlowNode for request
domain=request.POST.get('domain') → ControlFlowNode for Attribute
domain=request.POST.get('domain') → ControlFlowNode for Attribute()
domain=request.POST.get('domain') → ControlFlowNode for domain
domain=domain.replace("https://www.",'') → ControlFlowNode for domain
command="nslookup {}".format(domain) → ControlFlowNode for command
command, → ControlFlowNode for command
Remove shell=True and use subprocess.run() with a list of arguments instead of a string. Validate and sanitize the domain input by allowing only alphanumeric characters, hyphens, and dots. Use a whitelist of allowed commands (nslookup or dig) rather than constructing the command from user input.
import re

# Validate the domain input
if not re.match(r'^[a-zA-Z0-9.-]+$', domain):
    return HttpResponse('Invalid domain', status=400)

# Use subprocess.run with an argument list
command = ['nslookup', domain]
try:
    result = subprocess.run(command, capture_output=True, text=True, timeout=5)
    data = result.stdout
except subprocess.TimeoutExpired:
    return HttpResponse('Command timeout', status=500)
1a7314eca79c
Plaintext Storage of a Password
dockerized_labs/broken_auth_lab/app.py
dictionary assignment
14: # Vulnerable: Storing user data in memory
15: users = {
16:     'admin': {
17:         'password': 'admin123',  # Vulnerable: Weak password
18:         'email': 'admin@example.com',
19:         'role': 'admin'
>>> 20:     },
21:     'user': {
22:         'password': 'password123',  # Vulnerable: Weak password
23:         'email': 'user@example.com',
24:         'role': 'user'
25:     }
26: }
Passwords are stored in plain text. If the application data is compromised (e.g., memory dump, database breach), all user credentials are immediately exposed. Attackers can use these credentials to impersonate users, including administrators.
users[username]['password']
'password': 'admin123', → users['admin']['password']
if username in users and users[username]['password'] == password: → users[username]['password']
if username in users and users[username]['password'] == password: → users[username]['password']
Use a strong, adaptive hashing algorithm (like bcrypt, scrypt, or Argon2) to hash passwords before storage. Never compare or store plain text passwords.
import bcrypt

# During registration: hash before storing
hashed_password = bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt())
# Store hashed_password instead of the plain text value

# During login: compare against the stored hash
if bcrypt.checkpw(password.encode('utf-8'), stored_hashed_password):
    ...  # login successful
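If a third-party dependency such as bcrypt is unavailable, the standard library's hashlib.scrypt provides a comparable memory-hard, salted hash. This is a sketch only; the cost parameters are illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Derive a salted scrypt digest; the 16-byte salt is stored as a prefix."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26, dklen=32)
    return salt + digest

def verify_password(password, stored):
    """Recompute the digest with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26, dklen=32)
    return hmac.compare_digest(candidate, digest)
```

Storing salt and digest together in one bytes value keeps the schema change to a single column.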
9c381f3841a3
Deserializing user-controlled data may allow attackers to execute arbitrary code.
dockerized_labs/insec_des_lab/main.py
31: def deserialize_data():
32:     try:
33:         serialized_data = request.form.get('serialized_data', '')
34:         decoded_data = base64.b64decode(serialized_data)
35:         # Intentionally vulnerable deserialization, matching PyGoat
>>> 36:         user = pickle.loads(decoded_data)
37:
38:         if isinstance(user, User):
39:             if user.is_admin:
40:                 message = f"Welcome Admin {user.username}! Here's the secret admin content: ADMIN_KEY_123"
41:             else:
An attacker could execute arbitrary code on the server by crafting malicious pickle payloads, potentially leading to complete system compromise, data theft, or server takeover through remote code execution.
ControlFlowNode for ImportMember → ControlFlowNode for request → ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for serialized_data → ControlFlowNode for decoded_data → ControlFlowNode for decoded_data
from flask import Flask, render_template, request, make_response → ControlFlowNode for ImportMember
from flask import Flask, render_template, request, make_response → ControlFlowNode for request
serialized_data = request.form.get('serialized_data', '') → ControlFlowNode for request
serialized_data = request.form.get('serialized_data', '') → ControlFlowNode for Attribute
serialized_data = request.form.get('serialized_data', '') → ControlFlowNode for Attribute()
serialized_data = request.form.get('serialized_data', '') → ControlFlowNode for serialized_data
decoded_data = base64.b64decode(serialized_data) → ControlFlowNode for decoded_data
user = pickle.loads(decoded_data) → ControlFlowNode for decoded_data
Replace pickle deserialization with a secure alternative. First, remove the vulnerable pickle.loads() call entirely. Instead, implement a JSON-based serialization/deserialization scheme using Python's json module. Validate and sanitize all user input before processing, and maintain the User object structure using safe data types.
import json

def deserialize_data():
    try:
        serialized_data = request.form.get('serialized_data', '')
        # Replace pickle with JSON deserialization
        user_data = json.loads(serialized_data)
        # Create a User object from validated data
        user = User(
            username=str(user_data.get('username', '')),
            is_admin=bool(user_data.get('is_admin', False))
        )
        if user.is_admin:
            message = f"Welcome Admin {user.username}! Here's the secret admin content: ADMIN_KEY_123"
        else:
            message = f"Welcome {user.username}!"
        return make_response(message, 200)
    except (json.JSONDecodeError, ValueError) as e:
        return make_response(f"Invalid data format: {str(e)}", 400)
dde0834d7a08
Improper Control of Generation of Code ('Code Injection')
introduction/templates/Lab_2021/A3_Injection/ssti_lab.html
ssti_lab
20:         blog = request.POST["blog"]
21:         id = str(uuid.uuid4()).split('-')[-1]
22:
23:         blog = filter_blog(blog)
24:         prepend_code = "{% extends 'introduction/base.html' %}\n{% block content %}{% block title %}\n<title>SSTI-Blogs</title>\n{% endblock %}"
25:
26:         blog = prepend_code + blog + "{% endblock %}"
27:         new_blog = Blogs.objects.create(author = request.user, blog_id = id)
>>> 28:         new_blog.save()
29:         dirname = os.path.dirname(__file__)
30:         filename = os.path.join(dirname, f"templates/Lab_2021/A3_Injection/Blogs/{id}.html")
31:         file = open(filename, "w+")
32:         file.write(blog)
33:         file.close()
34:         return redirect(f'blog/{id}')
35:     else:
Attackers can inject Django template syntax to execute arbitrary code on the server. Example payloads can access sensitive data like SECRET_KEY (as shown in example template a2538af1b5e4.html), execute system commands, read files, or achieve remote code execution. The example file 9d73d120683d.html shows access to admin logs and password hashes.
blog
<textarea id="ssti_blog" name="blog" placeholder="your blogs goes here" style="background:#f4f4f924;overflow: auto"></textarea> → blog
blog = request.POST["blog"] → blog
file.write(blog) → blog
1. Implement strict input validation using allowlists. 2. Use a secure template rendering approach that doesn't allow user-controlled template syntax. 3. Store blog content in database fields rather than template files. 4. Use a dedicated sanitization function that removes or escapes template syntax.
import re
import bleach  # HTML sanitization library

ALLOWED_TAGS = ['b', 'i', 'p', 'br', 'ul', 'li', 'ol']

def sanitize_blog_content(content):
    # Remove all Django template syntax
    content = re.sub(r'\{[%{].*?[%}]\}', '', content)
    # Strip HTML except basic formatting tags
    return bleach.clean(content, tags=ALLOWED_TAGS, strip=True)

# Usage (assumes a 'content' field on the blog model):
blog_content = sanitize_blog_content(request.POST["blog"])
# Store in the database instead of writing a template file
Blogs.objects.create(author=request.user, content=blog_content, blog_id=id)
dbd9425e3da1
Using broken or weak cryptographic hashing algorithms can compromise security.
introduction/mitre.py
156:     if request.method == 'GET':
157:         return render(request, 'mitre/csrf_lab_login.html')
158:     elif request.method == 'POST':
159:         password = request.POST.get('password')
160:         username = request.POST.get('username')
>>> 161:         password = md5(password.encode()).hexdigest()
162:         User = CSRF_user_tbl.objects.filter(username=username, password=password)
163:         if User:
164:             payload ={
165:                 'username': username,
166:                 'exp': datetime.datetime.utcnow() + datetime.timedelta(seconds=300),
An attacker could crack MD5 hashes using rainbow tables or GPU-based brute force attacks to recover plaintext passwords. Since MD5 is fast and unsalted, identical passwords produce identical hashes, enabling credential stuffing attacks across the user database.
ControlFlowNode for Attribute() → ControlFlowNode for password → ControlFlowNode for Attribute()
password = request.POST.get('password') → ControlFlowNode for Attribute()
password = request.POST.get('password') → ControlFlowNode for password
password = md5(password.encode()).hexdigest() → ControlFlowNode for Attribute()
Replace MD5 with a secure password hashing algorithm designed for password storage. Use Django's built-in make_password() function which defaults to PBKDF2 with SHA256 and a salt. When verifying passwords, use check_password() instead of comparing raw hashes. Ensure all existing passwords are migrated to the new hashing method.
from django.contrib.auth.hashers import make_password, check_password

# In the POST handler:
if request.method == 'POST':
    password = request.POST.get('password')
    username = request.POST.get('username')
    try:
        user = CSRF_user_tbl.objects.get(username=username)
        if check_password(password, user.password):
            ...  # authentication successful; rest of the code
    except CSRF_user_tbl.DoesNotExist:
        # Handle the invalid user
        pass

# When creating users, use:
hashed_password = make_password(plain_text_password)
ef8ea1c93a29
Using broken or weak cryptographic hashing algorithms can compromise security.
introduction/views.py
1014:         return render(request,"Lab_2021/A2_Crypto_failur/crypto_failure_lab.html")
1015:     elif request.method=="POST":
1016:         username = request.POST["username"]
1017:         password = request.POST["password"]
1018:         try:
>>> 1019:             password = md5(password.encode()).hexdigest()
1020:             user = CF_user.objects.filter(username=username,password=password).first()
1021:             return render(request,"Lab_2021/A2_Crypto_failur/crypto_failure_lab.html",{"user":user, "success":True,"failure":False})
1022:         except Exception as e:
1023:             return render(request,"Lab_2021/A2_Crypto_failur/crypto_failure_lab.html",{"success":False, "failure":True})
1024:     else :
Attackers could crack MD5 hashes using rainbow tables or GPU-based brute force attacks to recover plaintext passwords, especially since MD5 lacks salt and is computationally cheap. This could lead to account takeover and credential reuse attacks across other services.
ControlFlowNode for Subscript → ControlFlowNode for password → ControlFlowNode for Attribute()
password = request.POST["password"] → ControlFlowNode for Subscript
password = request.POST["password"] → ControlFlowNode for password
password = md5(password.encode()).hexdigest() → ControlFlowNode for Attribute()
Replace MD5 with a secure password hashing algorithm designed for password storage. Use Django's built-in make_password() function which defaults to PBKDF2 with SHA256 and a per-user salt. Import the function from django.contrib.auth.hashers and apply it to the password before storing or comparing. Ensure the same algorithm is used consistently across registration and login.
from django.contrib.auth.hashers import make_password, check_password

# In the POST handler:
username = request.POST["username"]
password = request.POST["password"]
try:
    # For registration: hashed_password = make_password(password)
    # For login verification:
    user = CF_user.objects.filter(username=username).first()
    if user and check_password(password, user.password):
        return render(request, "Lab_2021/A2_Crypto_failur/crypto_failure_lab.html", {"user": user, "success": True, "failure": False})
    else:
        return render(request, "Lab_2021/A2_Crypto_failur/crypto_failure_lab.html", {"success": False, "failure": True})
except Exception:
    return render(request, "Lab_2021/A2_Crypto_failur/crypto_failure_lab.html", {"success": False, "failure": True})
894eafa401c4
Using broken or weak cryptographic hashing algorithms can compromise security.
introduction/views.py
1182:     elif request.method == "POST":
1183:         token = str(uuid.uuid4())
1184:         try:
1185:             username = request.POST["username"]
1186:             password = request.POST["password"]
>>> 1187:             password = hashlib.sha256(password.encode()).hexdigest()
1188:         except:
1189:             response = render(request, "Lab_2021/A7_auth_failure/lab3.html")
1190:             response.set_cookie("session_id", None)
1191:             return response
1192:
An attacker could use rainbow tables or GPU-based attacks to crack SHA-256 hashes, potentially compromising user accounts. Since SHA-256 is fast and unsalted, identical passwords will have identical hashes, enabling password correlation attacks.
ControlFlowNode for Subscript → ControlFlowNode for password → ControlFlowNode for Attribute()
password = request.POST["password"] → ControlFlowNode for Subscript
password = request.POST["password"] → ControlFlowNode for password
password = hashlib.sha256(password.encode()).hexdigest() → ControlFlowNode for Attribute()
Replace SHA-256 with a password hashing algorithm designed for password storage, such as Argon2, bcrypt, or PBKDF2. Use Django's built-in make_password() function which defaults to PBKDF2 with a salt and multiple iterations. Store only the hashed password in the database, never the plaintext or SHA-256 hash.
from django.contrib.auth.hashers import make_password, check_password

# Replace line 1187 with:
password = make_password(password)

# Later, when verifying passwords:
# if check_password(provided_password, stored_hash):
e9958f2b1e57
Building a SQL query from user-controlled sources is vulnerable to insertion of malicious SQL code by the user.
introduction/views.py
156:
157:     sql_query = "SELECT * FROM introduction_login WHERE user='"+name+"'AND password='"+password+"'"
158:     print(sql_query)
159:     try:
160:         print("\nin try\n")
>>> 161:         val=login.objects.raw(sql_query)
162:     except:
163:         print("\nin except\n")
164:         return render(
165:             request,
166:             'Lab/SQL/sql_lab.html',
An attacker could execute arbitrary SQL commands, allowing them to bypass authentication, extract sensitive data from the database, modify or delete data, or potentially execute system commands depending on database configuration.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for name → ControlFlowNode for sql_query → ControlFlowNode for sql_query
def sql_lab(request): → ControlFlowNode for request
name=request.POST.get('name') → ControlFlowNode for Attribute
name=request.POST.get('name') → ControlFlowNode for Attribute()
name=request.POST.get('name') → ControlFlowNode for name
sql_query = "SELECT * FROM introduction_login WHERE user='"+name+"'AND password='"+password+"'" → ControlFlowNode for sql_query
val=login.objects.raw(sql_query) → ControlFlowNode for sql_query
Replace the raw SQL query with Django's ORM query methods using parameterized queries. First, use Django's filter() method with field lookups instead of string concatenation. Second, ensure user input is properly validated and sanitized by the ORM, which will handle SQL injection protection automatically.
val = login.objects.filter(user=name, password=password).first()

# Or, for authentication purposes, better to use:
# from django.contrib.auth import authenticate
# user = authenticate(request, username=name, password=password)
66e60191b10d
Building a SQL query from user-controlled sources is vulnerable to insertion of malicious SQL code by the user.
introduction/views.py
866:     sql_instance.save()
867:
868:     print(sql_query)
869:
870:     try:
>>> 871:         user = sql_lab_table.objects.raw(sql_query)
872:         user = user[0].id
873:         print(user)
874:
875:     except:
876:         return render(
An attacker could execute arbitrary SQL commands on the database, potentially reading, modifying, or deleting sensitive data, bypassing authentication, or gaining administrative access to the database server.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for name → ControlFlowNode for sql_query → ControlFlowNode for sql_query
def injection_sql_lab(request): → ControlFlowNode for request
name=request.POST.get('name') → ControlFlowNode for Attribute
name=request.POST.get('name') → ControlFlowNode for Attribute()
name=request.POST.get('name') → ControlFlowNode for name
sql_query = "SELECT * FROM introduction_sql_lab_table WHERE id='"+name+"'AND password='"+password+"'" → ControlFlowNode for sql_query
user = sql_lab_table.objects.raw(sql_query) → ControlFlowNode for sql_query
Replace the raw SQL query with Django's ORM query methods using parameterized queries. Instead of building the SQL string with user input, use the ORM's filter() method with proper field lookups. If raw SQL is absolutely necessary, use Django's parameterized raw() method with query parameters.
# Replace lines 857-871 with:
sql_query = "SELECT * FROM introduction_sql_lab_table WHERE name = %s"
user = sql_lab_table.objects.raw(sql_query, [name])

# Or, better yet, use the Django ORM directly:
user = sql_lab_table.objects.filter(name=name).first()
if user:
    user_id = user.id
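The same parameterization principle applies outside Django to any DB-API driver: placeholders keep user input as data, so injection payloads match nothing instead of rewriting the query. A minimal stdlib `sqlite3` sketch (the `users` table and `find_user` helper are illustrative, not from the lab):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("admin", "secret"))

def find_user(conn, user_id, password):
    # Placeholders bind the input as data, never as SQL, whatever it contains
    cur = conn.execute(
        "SELECT id FROM users WHERE id = ? AND password = ?",
        (user_id, password),
    )
    return cur.fetchone()
```

With this, `find_user(conn, "admin' --", "x")` simply finds no row, rather than commenting out the password check the way the concatenated query would.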
e8f65463a50bParsing user input as an XML document with arbitrary internal entity expansion is vulnerable to denial-of-service attacks.
introduction/views.py
249: @csrf_exempt
250: def xxe_parse(request):
251:
252: parser = make_parser()
253: parser.setFeature(feature_external_ges, True)
>>> 254: doc = parseString(request.body.decode('utf-8'), parser=parser)
255: for event, node in doc:
256: if event == START_ELEMENT and node.tagName == 'text':
257: doc.expandNode(node)
258: text = node.toxml()
259: startInd = text.find('>')
An attacker could exploit this vulnerability to perform XML External Entity (XXE) attacks, potentially reading sensitive files from the server filesystem, causing denial of service through entity expansion attacks (billion laughs attack), or making internal network requests to exfiltrate data.
ControlFlowNode for request → ControlFlowNode for Attribute()
def xxe_parse(request): → ControlFlowNode for request
doc = parseString(request.body.decode('utf-8'), parser=parser) → ControlFlowNode for Attribute()
Disable XML external entity processing entirely by setting both `feature_external_ges` and `feature_external_pes` to False on the parser before parsing. Additionally, consider using the `defusedxml` library which provides secure XML parsing by default. Replace the vulnerable `xml.sax.make_parser()` with `defusedxml.sax.make_parser()` to prevent entity expansion attacks.
import defusedxml.sax
from defusedxml.common import DefusedXmlException
from xml.sax.handler import feature_external_ges, feature_external_pes

@csrf_exempt
def xxe_parse(request):
    try:
        parser = defusedxml.sax.make_parser()
        parser.setFeature(feature_external_ges, False)
        parser.setFeature(feature_external_pes, False)
        doc = parseString(request.body.decode('utf-8'), parser=parser)
        for event, node in doc:
            if event == START_ELEMENT and node.tagName == 'text':
                doc.expandNode(node)
                text = node.toxml()
                startInd = text.find('>')
    except DefusedXmlException:
        return HttpResponseBadRequest('Invalid XML')
b5476b51e68aLogging sensitive information without encryption or hashing can expose it to an attacker.
introduction/views.py
153: if name:
154:
155: if login.objects.filter(user=name):
156:
157: sql_query = "SELECT * FROM introduction_login WHERE user='"+name+"'AND password='"+password+"'"
>>> 158: print(sql_query)
159: try:
160: print("\nin try\n")
161: val=login.objects.raw(sql_query)
162: except:
163: print("\nin except\n")
An attacker with access to logs could extract plaintext credentials, leading to account compromise. Additionally, the raw SQL construction creates SQL injection vulnerabilities that could allow database manipulation or data exfiltration.
ControlFlowNode for Attribute() → ControlFlowNode for password → ControlFlowNode for sql_query → ControlFlowNode for sql_query
password=request.POST.get('pass') → ControlFlowNode for Attribute()
password=request.POST.get('pass') → ControlFlowNode for password
sql_query = "SELECT * FROM introduction_login WHERE user='"+name+"'AND password='"+password+"'" → ControlFlowNode for sql_query
print(sql_query) → ControlFlowNode for sql_query
First, remove the print statement that logs the SQL query containing plaintext credentials. Second, replace the raw SQL query with Django's ORM query methods to prevent SQL injection. Third, use Django's built-in authentication system instead of manual password checking. Finally, ensure no sensitive data is logged in production by configuring appropriate logging levels.
from django.contrib.auth import authenticate

if name:
    if login.objects.filter(user=name):
        # Use Django's authentication system instead of raw SQL
        user = authenticate(username=name, password=password)
        if user is not None:
            # Authentication successful; no print statements with sensitive data
            val = user
        else:
            # Authentication failed
            val = None
1e8b612918f8Logging sensitive information without encryption or hashing can expose it to an attacker.
introduction/views.py
303: return render(request,'Lab/AUTH/auth_lab_login.html')
304: elif request.method == 'POST':
305: try:
306: user_name = request.POST['username']
307: passwd = request.POST['pass']
>>> 308: print(user_name,passwd)
309: obj = authLogin.objects.filter(username=user_name,password=passwd)[0]
310: try:
311: rendered = render_to_string('Lab/AUTH/auth_success.html', {'username': obj.username,'userid':obj.userid,'name':obj.name, 'err_msg':'Login Successful'})
312: response = HttpResponse(rendered)
313: response.set_cookie('userid', obj.userid, max_age=31449600, samesite=None, secure=False)
An attacker with access to application logs could steal user credentials, leading to account compromise and potential lateral movement within the system. This also violates privacy regulations and exposes the system to credential stuffing attacks.
ControlFlowNode for Subscript → ControlFlowNode for passwd → ControlFlowNode for passwd
passwd = request.POST['pass'] → ControlFlowNode for Subscript
passwd = request.POST['pass'] → ControlFlowNode for passwd
print(user_name,passwd) → ControlFlowNode for passwd
Remove the print statement that logs credentials in plain text. If debugging is necessary, replace it with logging that redacts sensitive information or use a secure logging framework. Ensure no other logging statements in the codebase expose sensitive data like passwords, tokens, or personal information.
303: return render(request,'Lab/AUTH/auth_lab_login.html')
304: elif request.method == 'POST':
305: try:
306: user_name = request.POST['username']
307: passwd = request.POST['pass']
308: # Removed insecure credential logging
309: obj = authLogin.objects.filter(username=user_name,password=passwd)[0]
1cd13b9fef9fLogging sensitive information without encryption or hashing can expose it to an attacker.
introduction/views.py
743: else:
744: return redirect('login')
745:
746: name = request.POST.get('name')
747: password = request.POST.get('pass')
>>> 748: print(password)
749: print(name)
750: if name:
751: if request.COOKIES.get('admin') == "1":
752: return render(
753: request,
Attackers with access to server logs or console output could steal user credentials, leading to account compromise and potential privilege escalation. This could also violate data protection regulations like GDPR or CCPA.
ControlFlowNode for Attribute() → ControlFlowNode for password → ControlFlowNode for password
password = request.POST.get('pass') → ControlFlowNode for Attribute()
password = request.POST.get('pass') → ControlFlowNode for password
print(password) → ControlFlowNode for password
Remove the print statements that log sensitive credentials. Instead of printing passwords to the console, implement proper logging without sensitive data. If debugging is needed, use a secure logging framework that redacts sensitive information and ensure logs are stored securely with access controls.
743: else:
744: return redirect('login')
745:
746: name = request.POST.get('name')
747: password = request.POST.get('pass')
748: # Removed insecure print statements
749: if name:
750: if request.COOKIES.get('admin') == "1":
751: return render(
752: request,
40e0d63c22b8Logging sensitive information without encryption or hashing can expose it to an attacker.
introduction/views.py
849: if request.user.is_authenticated:
850:
851: name=request.POST.get('name')
852: password=request.POST.get('pass')
853: print(name)
>>> 854: print(password)
855:
856: if name:
857: sql_query = "SELECT * FROM introduction_sql_lab_table WHERE id='"+name+"'AND password='"+password+"'"
858:
859: sql_instance = sql_lab_table(id="admin", password="65079b006e85a7e798abecb99e47c154")
An attacker with access to server logs could steal user credentials, leading to account compromise and potential lateral movement within the system. The clear-text logging also violates privacy regulations and exposes authentication secrets.
ControlFlowNode for Attribute() → ControlFlowNode for password → ControlFlowNode for password
password=request.POST.get('pass') → ControlFlowNode for Attribute()
password=request.POST.get('pass') → ControlFlowNode for password
print(password) → ControlFlowNode for password
Remove the print statements that log sensitive credentials. Instead of printing passwords to the console, use Django's logging framework with appropriate log levels and ensure passwords are never logged. Additionally, fix the SQL injection vulnerability by using parameterized queries instead of string concatenation.
if request.user.is_authenticated:
    name = request.POST.get('name')
    password = request.POST.get('pass')
    # REMOVED: print(name)
    # REMOVED: print(password)
    if name:
        sql_query = "SELECT * FROM introduction_sql_lab_table WHERE id=%s AND password=%s"
        # Execute as a parameterized query, e.g. sql_lab_table.objects.raw(sql_query, [name, password])
        sql_instance = sql_lab_table(id="admin", password="65079b006e85a7e798abecb99e47c154")
627fc939a6f1Logging sensitive information without encryption or hashing can expose it to an attacker.
introduction/views.py
863: sql_instance = sql_lab_table(id="slinky", password="b4f945433ea4c369c12741f62a23ccc0")
864: sql_instance.save()
865: sql_instance = sql_lab_table(id="bloke", password="f8d1ce191319ea8f4d1d26e65e130dd5")
866: sql_instance.save()
867:
>>> 868: print(sql_query)
869:
870: try:
871: user = sql_lab_table.objects.raw(sql_query)
872: user = user[0].id
873: print(user)
An attacker with access to application logs could extract password hashes from the logged SQL queries, enabling offline brute-force attacks or credential stuffing. If the query contains plaintext passwords, the attacker could directly compromise user accounts.
ControlFlowNode for Attribute() → ControlFlowNode for password → ControlFlowNode for sql_query → ControlFlowNode for sql_query
password=request.POST.get('pass') → ControlFlowNode for Attribute()
password=request.POST.get('pass') → ControlFlowNode for password
sql_query = "SELECT * FROM introduction_sql_lab_table WHERE id='"+name+"'AND password='"+password+"'" → ControlFlowNode for sql_query
print(sql_query) → ControlFlowNode for sql_query
Remove the print statement that logs the SQL query containing sensitive password data. Instead of logging the raw SQL query, log a sanitized version that excludes sensitive parameters or use Django's logging framework with appropriate log levels. For debugging purposes, consider using Django's debug logging only in development environments with proper filtering.
863: sql_instance = sql_lab_table(id="slinky", password="b4f945433ea4c369c12741f62a23ccc0")
864: sql_instance.save()
865: sql_instance = sql_lab_table(id="bloke", password="f8d1ce191319ea8f4d1d26e65e130dd5")
866: sql_instance.save()
867:
868: # Removed sensitive logging: print(sql_query)
869:
870: try:
871: user = sql_lab_table.objects.raw(sql_query)
872: user = user[0].id
873: # Removed sensitive logging: print(user)
95f43cab690aAccessing paths influenced by users can allow an attacker to access unexpected resources.
introduction/views.py
915: else:
916: file=request.POST["blog"]
917: try :
918: dirname = os.path.dirname(__file__)
919: filename = os.path.join(dirname, file)
>>> 920: file = open(filename,"r")
921: data = file.read()
922: return render(request,"Lab/ssrf/ssrf_lab.html",{"blog":data})
923: except:
924: return render(request, "Lab/ssrf/ssrf_lab.html", {"blog": "No blog found"})
925: else:
An attacker could perform path traversal attacks to read arbitrary files on the server (e.g., /etc/passwd, configuration files, source code) or potentially write files if the vulnerability existed in a write operation context.
ControlFlowNode for request → ControlFlowNode for file → ControlFlowNode for filename → ControlFlowNode for filename
def ssrf_lab(request): → ControlFlowNode for request
file=request.POST["blog"] → ControlFlowNode for file
filename = os.path.join(dirname, file) → ControlFlowNode for filename
file = open(filename,"r") → ControlFlowNode for filename
First, validate and sanitize the user input to ensure it contains only allowed characters and doesn't contain path traversal sequences. Second, restrict file access to a specific safe directory using os.path.abspath() and ensure the resolved path stays within that directory. Third, use a whitelist approach if possible, mapping user input to known safe files. Finally, implement proper error handling that doesn't leak sensitive information.
import os
from django.conf import settings

# ... inside ssrf_lab function ...
else:
    user_file = request.POST["blog"]
    try:
        # Restrict file access to a dedicated safe directory
        safe_dir = os.path.abspath(os.path.join(settings.BASE_DIR, "safe_blog_files"))

        # Normalize the path and reject traversal sequences
        requested_path = os.path.normpath(user_file).lstrip('/')
        if '..' in requested_path:
            raise ValueError("Invalid file path")

        # Resolve the full path and verify it stays within the safe directory
        full_path = os.path.abspath(os.path.join(safe_dir, requested_path))
        if not full_path.startswith(safe_dir + os.sep):
            raise ValueError("Path traversal attempt detected")

        # Open the file with explicit encoding
        with open(full_path, "r", encoding="utf-8") as f:
            data = f.read()
        return render(request, "Lab/ssrf/ssrf_lab.html", {"blog": data})
    except (ValueError, OSError):
        return render(request, "Lab/ssrf/ssrf_lab.html", {"blog": "No blog found"})
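A note on containment checks: a bare `startswith(safe_dir)` test is a classic pitfall, since `/safe` would wrongly accept `/safe_evil/secret`. A more robust check can be built on the stdlib's `os.path.commonpath`; `resolve_under` below is a hypothetical helper name, shown as a sketch:

```python
import os

def resolve_under(base_dir, user_path):
    """Resolve user_path beneath base_dir, rejecting anything that escapes it."""
    base = os.path.abspath(base_dir)
    candidate = os.path.abspath(os.path.join(base, user_path))
    # commonpath avoids the startswith() prefix pitfall where
    # "/safe" would accept "/safe_evil/secret"
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("path escapes base directory")
    return candidate
```

The traversal attempt `resolve_under("/tmp/safe", "../../etc/passwd")` resolves to `/tmp/etc/passwd`, whose common path with the base is only `/tmp`, so it is rejected.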
0dade280d8adBuilding log entries from user-controlled data is vulnerable to insertion of forged log entries by a malicious user.
introduction/views.py
642:
643: if x_forwarded_for:
644: ip = x_forwarded_for.split(',')[0]
645: else:
646: ip = request.META.get('REMOTE_ADDR')
>>> 647: logging.info(f"{now}:{ip}")
648: return render (request,"Lab/A10/a10_lab2.html")
649: else:
650: user=request.POST.get("name")
651: password=request.POST.get("pass")
652: x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
An attacker could inject fake log entries by including newline characters in the X-Forwarded-For header, allowing them to forge log events, obfuscate attack traces, or corrupt log files for analysis.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for ip → ControlFlowNode for Fstring
def a10_lab2(request): → ControlFlowNode for request
ip = request.META.get('REMOTE_ADDR') → ControlFlowNode for Attribute
ip = request.META.get('REMOTE_ADDR') → ControlFlowNode for Attribute()
ip = request.META.get('REMOTE_ADDR') → ControlFlowNode for ip
logging.info(f"{now}:{ip}") → ControlFlowNode for Fstring
Sanitize the IP address before logging by removing or escaping newline characters. First, create a helper function that strips or replaces newline characters (\n, \r) from the IP string. Then, apply this sanitization to the IP variable before using it in the log message. This prevents log injection attacks while preserving the original IP information.
import re

def sanitize_for_logging(value):
    """Remove newline characters to prevent log injection"""
    if value:
        return re.sub(r'[\r\n]', '', value)
    return value

# In the a10_lab2 function:
if x_forwarded_for:
    ip = x_forwarded_for.split(',')[0]
else:
    ip = request.META.get('REMOTE_ADDR')
sanitized_ip = sanitize_for_logging(ip)
logging.info(f"{now}:{sanitized_ip}")
7f2e99762a04Building log entries from user-controlled data is vulnerable to insertion of forged log entries by a malicious user.
introduction/views.py
656: else:
657: ip = request.META.get('REMOTE_ADDR')
658:
659: if login.objects.filter(user=user,password=password):
660: if ip != '127.0.0.1':
>>> 661: logging.warning(f"{now}:{ip}:{user}")
662: logging.info(f"{now}:{ip}:{user}")
663: return render(request,"Lab/A10/a10_lab2.html",{"name":user})
664: else:
665: logging.error(f"{now}:{ip}:{user}")
666: return render(request, "Lab/A10/a10_lab2.html", {"error": " Wrong username or Password"})
An attacker could inject newline characters into the username field to forge log entries, manipulate log files to hide malicious activity, or corrupt log formats causing parsing failures in monitoring systems.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for user → ControlFlowNode for Fstring
def a10_lab2(request): → ControlFlowNode for request
user=request.POST.get("name") → ControlFlowNode for Attribute
user=request.POST.get("name") → ControlFlowNode for Attribute()
user=request.POST.get("name") → ControlFlowNode for user
logging.warning(f"{now}:{ip}:{user}") → ControlFlowNode for Fstring
First, sanitize the user input by removing or escaping newline characters before logging. Use a logging formatter that properly escapes special characters, or create a sanitization function that replaces newlines and other control characters. Then modify the logging statements to use the sanitized values instead of raw user input.
import re

def sanitize_log_input(value):
    """Replace newlines and carriage returns to prevent log injection"""
    if value:
        return re.sub(r'[\r\n]', '_', str(value))
    return value

# In the a10_lab2 function:
user_input = request.POST.get("name")
sanitized_user = sanitize_log_input(user_input)

# Then use sanitized_user in logging:
logging.warning(f"{now}:{ip}:{sanitized_user}")
logging.info(f"{now}:{ip}:{sanitized_user}")
logging.error(f"{now}:{ip}:{sanitized_user}")
9f0c95d4b12fintroduction/views.py
915: else:
916: file=request.POST["blog"]
917: try :
918: dirname = os.path.dirname(__file__)
919: filename = os.path.join(dirname, file)
>>> 920: file = open(filename,"r")
921: data = file.read()
922: return render(request,"Lab/ssrf/ssrf_lab.html",{"blog":data})
923: except:
924: return render(request, "Lab/ssrf/ssrf_lab.html", {"blog": "No blog found"})
925: else:
Path traversal via user-controlled filename in open() - duplicate of finding 17 from different scanner.
open(filename
0e5f19ff571bUsing broken or weak cryptographic hashing algorithms can compromise security.
introduction/utility.py
54:
55: def filter_blog(code):
56: return code
57:
58: def customHash(password):
>>> 59: return hashlib.sha256(password.encode()).hexdigest()[::-1]
An attacker could perform efficient brute-force or dictionary attacks against the reversed SHA-256 hashes, potentially recovering plaintext passwords that could be used to compromise user accounts.
ControlFlowNode for password → ControlFlowNode for Attribute()def customHash(password): → ControlFlowNode for password
return hashlib.sha256(password.encode()).hexdigest()[::-1] → ControlFlowNode for Attribute()
Replace SHA-256 with a password hashing algorithm designed for password storage, such as bcrypt, scrypt, or Argon2. Use a cryptographically secure salt and appropriate work factors. Remove the string reversal operation as it provides no security benefit and may interfere with proper hash verification.
import bcrypt

def customHash(password):
    # Generate a salt and hash with an appropriate cost factor
    salt = bcrypt.gensalt(rounds=12)
    hashed_password = bcrypt.hashpw(password.encode(), salt)
    return hashed_password.decode()
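If adding the bcrypt dependency is not an option, the standard library's `hashlib.scrypt` (Python 3.6+, requires OpenSSL 1.1+) provides a memory-hard alternative, as the recommendation notes. A sketch with hypothetical helper names, storing the 16-byte salt alongside the digest:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Salted, memory-hard hash; returns salt + digest as a single blob."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def verify_password(password, stored):
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(candidate, digest)
```

The `n=2**14, r=8, p=1` parameters (~16 MiB of memory per hash) are a common baseline; tune them for the deployment's hardware.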
0c8e2be13933Running a Flask app in debug mode may allow an attacker to run arbitrary code through the Werkzeug debugger.
dockerized_labs/broken_auth_lab/app.py
118: pass
119:
120: return redirect(url_for('lab'))
121:
122: if __name__ == '__main__':
>>> 123: app.run(host='0.0.0.0', port=5000, debug=True) # Vulnerable: Debug mode enabled in production
An attacker could exploit debug mode to execute arbitrary code via the interactive debugger console, access sensitive debugging information, and view detailed error traces that reveal internal application structure and potential attack vectors.
app.run(host='0.0.0.0', port=5000, debug=True) # Vulnerable: Debug mode enabled in production
Remove the debug=True parameter from app.run() in production. Instead, control debug mode through an environment variable that defaults to False. This prevents debug mode from being accidentally enabled in production deployments.
import os

if __name__ == '__main__':
    debug_mode = os.environ.get('FLASK_DEBUG', 'False').lower() == 'true'
    app.run(host='0.0.0.0', port=5000, debug=debug_mode)
daa14f105370Initializing the SECRET_KEY of a Flask application with a constant value can lead to authentication bypass
dockerized_labs/broken_auth_lab/app.py
3: import json
4: from datetime import datetime, timedelta
5: import base64
6:
7: app = Flask(__name__)
>>> 8: app.secret_key = 'your-secret-key-here' # Vulnerable: Hardcoded secret key
9:
10: # Vulnerable: Storing user data in memory
11: users = {
12: 'admin': {
13: 'password': 'admin123', # Vulnerable: Weak password
An attacker could forge session cookies, perform session fixation attacks, or decrypt sensitive session data, potentially gaining unauthorized access to user accounts and application functionality.
app.secret_key = 'your-secret-key-here' # Vulnerable: Hardcoded secret key
Generate a cryptographically secure random secret key at application startup instead of using a hardcoded value. For production deployments, load the secret key from an environment variable or a secure secrets management system. This ensures each deployment has a unique key and prevents key exposure in source code.
import os
import secrets
app = Flask(__name__)
# Generate secure random key if not provided via environment
app.secret_key = os.environ.get('FLASK_SECRET_KEY') or secrets.token_hex(32)
85ab5d44f7e0Use of Hard-coded Credentials
dockerized_labs/broken_auth_lab/app.py
Flask.__init__ 7: from flask import Flask, render_template, request, redirect, url_for, make_response, flash
8: import hashlib
>>> 9: import json
10: from datetime import datetime, timedelta
11: import base64
Attackers can forge session cookies, bypass authentication, and potentially execute arbitrary code if the secret key is compromised. This allows session hijacking and privilege escalation.
app.secret_key
app.secret_key = 'your-secret-key-here' → app.secret_key
app.secret_key = 'your-secret-key-here' → app.secret_key
Store the secret key in an environment variable or a secure configuration management system. Never hardcode secrets in source code.
import os
app.secret_key = os.environ.get('FLASK_SECRET_KEY')
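One caveat with the `os.environ.get()` approach: if the variable is unset, `secret_key` silently becomes `None` and Flask only errors later, when a session is first used. A fail-fast sketch (`require_secret` is a hypothetical helper, not part of Flask):

```python
import os

def require_secret(name):
    """Read a required secret from the environment, failing fast at startup."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"environment variable {name} must be set")
    return value

# Usage at application startup:
# app.secret_key = require_secret('FLASK_SECRET_KEY')
```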
5a3b1bb0894eSession Fixation
dockerized_labs/broken_auth_lab/app.py
base64.b64encode 42: if username in users and users[username]['password'] == password: # Vulnerable: Plain text password comparison
43: response = make_response(redirect(url_for('dashboard')))
44:
>>> 45: # Vulnerable: Insecure session management
46: session_token = base64.b64encode(f"{username}:{datetime.now()}".encode()).decode()
47:
48: if remember_me:
49: # Vulnerable: Insecure "Remember Me" implementation
50: response.set_cookie('session', session_token, max_age=30*24*60*60)
51: else:
52: response.set_cookie('session', session_token)
Session tokens are predictable (username + timestamp encoded in base64). Attackers can forge session tokens for any user by guessing the username and approximate time of login. This leads to session hijacking and authentication bypass.
session_token
session_token = base64.b64encode(f"{username}:{datetime.now()}".encode()).decode() → session_token
response.set_cookie('session', session_token, max_age=30*24*60*60) → session_token
response.set_cookie('session', session_token, max_age=30*24*60*60) → session_token
Use cryptographically secure random number generators to generate session IDs. Flask-Session or Flask-Login extensions handle secure session management properly.
import secrets

session_token = secrets.token_urlsafe(32)
# Use Flask-Login or Flask-Session for proper session handling
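To make the contrast with the predictable base64 token concrete, here is a minimal sketch of an opaque server-side session store keyed by a CSPRNG token. The in-memory dict and helper names are illustrative only; a real application should use Flask-Session or Flask-Login as recommended above:

```python
import secrets

sessions = {}  # token -> username; illustrative in-memory store

def create_session(username):
    # token_urlsafe(32) draws 32 random bytes (~256 bits) from the CSPRNG
    token = secrets.token_urlsafe(32)
    sessions[token] = username
    return token

def get_session_user(token):
    # The token encodes no user data, so it cannot be forged offline
    return sessions.get(token)

def destroy_session(token):
    sessions.pop(token, None)
```

Because the username lives only server-side, an attacker who knows a victim's username and login time learns nothing about the token, unlike `base64(username:timestamp)`.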
09e290ee8d3eUse of Weak Hash
dockerized_labs/broken_auth_lab/app.py
hashlib.md5 75: for username, user_data in users.items():
76: if user_data['email'] == email:
77: # Vulnerable: Predictable token generation
>>> 78: token = hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest()
79: password_reset_tokens[token] = username
80:
81: # In a real application, this would send an email
82: # Vulnerable: Token exposed in response
MD5 is cryptographically broken and unsuitable for security purposes. Predictable input (email + timestamp) makes tokens easily guessable. Attackers can reset passwords for any user by brute-forcing tokens.
token
token = hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest() → token
token = hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest() → token
Use cryptographically secure random tokens generated by secrets module or UUID. Ensure tokens have sufficient entropy and are single-use with expiration.
import secrets

token = secrets.token_urlsafe(32)
# Store the token with an expiration timestamp and invalidate it after one use
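The "single-use with expiration" requirement from the recommendation can be sketched as follows. `issue_reset_token` and `redeem_reset_token` are hypothetical names; the in-memory dict mirrors the lab's `password_reset_tokens`:

```python
import secrets
from datetime import datetime, timedelta

RESET_TOKEN_TTL = timedelta(hours=1)
password_reset_tokens = {}  # token -> (username, expires_at)

def issue_reset_token(username):
    token = secrets.token_urlsafe(32)  # unguessable, unlike md5(email:timestamp)
    password_reset_tokens[token] = (username, datetime.now() + RESET_TOKEN_TTL)
    return token

def redeem_reset_token(token):
    entry = password_reset_tokens.pop(token, None)  # pop() enforces single use
    if entry is None:
        return None
    username, expires_at = entry
    if datetime.now() > expires_at:
        return None
    return username
```

A real application would also deliver the token only by email, never in the HTTP response as the lab does.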
8f595c306124Sensitive information stored without encryption or hashing can expose it to an attacker.
introduction/playground/A9/archive.py
44: self.request = request
45:
46: def info(self,msg):
47: now = datetime.datetime.now()
48: f = open('test.log', 'a')
>>> 49: f.write(f"INFO:{now}:{msg}\n")
50: f.close()
51:
52: def warning(self,msg):
53: now = datetime.datetime.now()
54: f = open('test.log', 'a')
An attacker with access to the log file could extract user credentials, leading to account compromise and potential lateral movement within the system. This could also violate data protection regulations if personal information is exposed.
ControlFlowNode for Subscript → ControlFlowNode for password → ControlFlowNode for Fstring → ControlFlowNode for msg → ControlFlowNode for Fstring
password = request.POST['password'] → ControlFlowNode for Subscript
password = request.POST['password'] → ControlFlowNode for password
L.info(f"POST request with username {username} and password {password}") → ControlFlowNode for Fstring
def info(self,msg): → ControlFlowNode for msg
f.write(f"INFO:{now}:{msg}\n") → ControlFlowNode for Fstring
First, remove the password from the log message at line 16 by logging only the username. Second, ensure no sensitive data is ever written to log files by implementing a data sanitization function that strips passwords or other credentials before logging. Third, consider encrypting the log file if it must contain any sensitive information, though avoiding storage is preferable.
Line 16 should be changed from: L.info(f"POST request with username {username} and password {password}")
To: L.info(f"POST request with username {username}")
Additionally, add a redaction helper so that values containing credential field names are never logged verbatim:
def sanitize_for_logging(data):
    sensitive_fields = ['password', 'passwd', 'pwd', 'secret', 'token']
    for field in sensitive_fields:
        if field in data.lower():
            return '[REDACTED]'
    return data
35771356b4f5Initializing the SECRET_KEY of a Flask application with a constant value can lead to authentication bypass
pygoat/settings.py
20:
21: # Quick-start development settings - unsuitable for production
22: # See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
23:
24: # SECURITY WARNING: keep the secret key used in production secret!
>>> 25: SECRET_KEY = 'lr66%-a!$km5ed@n5ug!tya5bv!0(yqwa1tn!q%0%3m2nh%oml'
26:
27: SENSITIVE_DATA = 'FLAGTHATNEEDSTOBEFOUND'
28:
29: # SECURITY WARNING: don't run with debug turned on in production!
30: DEBUG = True
An attacker could forge session cookies, perform cross-site request forgery (CSRF) attacks, or escalate privileges by signing malicious data. With the hardcoded key, they could also decrypt sensitive application data if obtained from source control.
SECRET_KEY = 'lr66%-a!$km5ed@n5ug!tya5bv!0(yqwa1tn!q%0%3m2nh%oml'
Generate a cryptographically secure random secret key at runtime instead of hardcoding it. For Django, use `django.core.management.utils.get_random_secret_key()` to generate a secure key. Store this key in an environment variable (e.g., `DJANGO_SECRET_KEY`) and load it via `os.environ.get()`. Never commit secret keys to version control.
import os
from django.core.management.utils import get_random_secret_key
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', get_random_secret_key())
# Alternative for production-only: SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
ba8fb5a9978eInitializing the SECRET_KEY of a Flask application with a constant value can lead to authentication bypass
dockerized_labs/sensitive_data_exposure/sensitive_data_lab/settings.py
3:
4: # Build paths inside the project like this: BASE_DIR / 'subdir'.
5: BASE_DIR = Path(__file__).resolve().parent.parent
6:
7: # SECURITY WARNING: keep the secret key used in production secret!
>>> 8: SECRET_KEY = 'django-insecure-key-for-demonstration-only'
9:
10: # SECURITY WARNING: don't run with debug turned on in production!
11: DEBUG = True
12:
13: ALLOWED_HOSTS = ['*']
An attacker could forge session cookies, perform cross-site request forgery (CSRF) attacks, or decrypt sensitive data stored by the application. With the secret key, they could impersonate users or escalate privileges.
SECRET_KEY = 'django-insecure-key-for-demonstration-only'
Generate a cryptographically secure random secret key for production use. Store it in an environment variable and load it via os.getenv() with a fallback for development. Never commit hardcoded secrets to version control. Use different keys for different environments.
import os
from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.getenv('DJANGO_SECRET_KEY', 'django-insecure-dev-key-only-for-local')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.getenv('DJANGO_DEBUG', 'False') == 'True'
ALLOWED_HOSTS = os.getenv('DJANGO_ALLOWED_HOSTS', '*').split(',')
55996729efe8Sensitive information stored without encryption or hashing can expose it to an attacker.
dockerized_labs/sensitive_data_exposure/templates/profile.html
217: var userData = {
218: username: "{{ user.username }}",
219: apiKey: "{{ user_data.api_key }}" // Yeah it is necessary for this lab.
220: };
221:
>>> 222: localStorage.setItem('user_api_key', "{{ user_data.api_key }}");
223:
224: console.log("Sensitive data exposed in console - check browser dev tools!");
225:
226: // more bad practices
227: // function checkAdminStatus() {
An attacker with access to the user's browser (via XSS, malware, or physical access) can extract the API key from localStorage and impersonate the user, potentially gaining unauthorized access to backend services and performing actions on behalf of the legitimate user.
user_data.api_key → {{ user_data.api_key }} → "{{ use ... key }}"
localStorage.setItem('user_api_key', "{{ user_data.api_key }}"); → user_data.api_key
localStorage.setItem('user_api_key', "{{ user_data.api_key }}"); → {{ user_data.api_key }}
localStorage.setItem('user_api_key', "{{ user_data.api_key }}"); → "{{ use ... key }}"
Remove the localStorage.setItem call that stores the API key in plain text. Instead, keep the API key server-side and implement a secure authentication mechanism. For client-side API calls, use server-generated session tokens with limited scope and lifetime, or implement a secure proxy endpoint that handles API requests without exposing the raw key to the client.
// Remove the vulnerable line entirely
// localStorage.setItem('user_api_key', "{{ user_data.api_key }}");
// Alternative: If API key must be used client-side, store it in an HTTP-only, secure cookie
// and retrieve it via JavaScript only when needed for authenticated requests
// (though this is still less secure than server-side handling)
5c09b9200bccReinterpreting text from the DOM as HTML can lead to a cross-site scripting vulnerability.
introduction/templates/mitre/csrf_dashboard.html
21: function handleSubmit(){
22: var recipent = document.getElementById('input1').value
23: var amount = document.getElementById('input2').value
24: var url = "/mitre/9/lab/api/"+recipent+"/"+amount
25: console.log(url)
>>> 26: window.location.href = url
27: }
28: </script>
29: </div>
30:
31: {% endblock content %}
Because the URL is built by concatenating unsanitized input onto a fixed same-origin path, an attacker cannot inject a javascript: or data: scheme here, but can inject extra path segments, traversal sequences, or query strings, steering the victim's browser to unintended endpoints and triggering state-changing API calls with attacker-controlled parameters, enabling phishing-style flows and other client-side attacks.
documen ... ).value → recipent → recipent → url → url
var recipent = document.getElementById('input1').value → documen ... ).value
var recipent = document.getElementById('input1').value → recipent
var url = "/mitre/9/lab/api/"+recipent+"/"+amount → recipent
var url = "/mitre/9/lab/api/"+recipent+"/"+amount → url
window.location.href = url → url
Validate and sanitize the user input before constructing the URL. First, ensure the recipient value contains only expected characters (e.g., alphanumeric). Second, encode the recipient parameter using encodeURIComponent() to prevent URL injection. Finally, consider using a safer approach like form submission with CSRF tokens instead of constructing URLs from user input.
function handleSubmit(){
    var recipient = document.getElementById('input1').value;
    var amount = document.getElementById('input2').value;
    // Validate recipient contains only alphanumeric characters
    if (!/^[a-zA-Z0-9]+$/.test(recipient)) {
        alert('Invalid recipient');
        return;
    }
    // Validate amount is a plain non-negative number
    if (!/^\d+(\.\d+)?$/.test(amount)) {
        alert('Invalid amount');
        return;
    }
    // URL-encode both parameters before building the path
    var url = "/mitre/9/lab/api/" + encodeURIComponent(recipient) + "/" + encodeURIComponent(amount);
    console.log(url);
    window.location.href = url;
}
7842aadd4381introduction/templates/mitre/csrf_dashboard.html
21: function handleSubmit(){
22: var recipent = document.getElementById('input1').value
23: var amount = document.getElementById('input2').value
24: var url = "/mitre/9/lab/api/"+recipent+"/"+amount
25: console.log(url)
>>> 26: window.location.href = url
27: }
28: </script>
29: </div>
30:
31: {% endblock content %}
Layer 2 triggered: Same vulnerability as finding 1 (Open Redirect via unsanitized user input in window.location.href). The finding is a true positive for client-side security risk.
window.location.href = url
8bcb3184050fintroduction/templates/mitre/csrf_dashboard.html
(Source code not available)
Layer 2 triggered: Same vulnerability as findings 1 and 2 (XSS/Open Redirect via unsanitized user input in window.location.href). The finding is a true positive for client-side security risk.
f951cebed800Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
introduction/templates/Lab/XSS/xss_lab.html
Django template rendering
23: </div><br>
24: <div class="display">
25: {% if company %}
26: <h3>Company Name : <i>{{company}}</i></h3>
27: <h3>Ceo Name : <i>{{ceo}}</i></h3>
>>> 28: <h3>About : <i>{{about}}</i></h3>
29: {% elif query %}
30: <h3> The company '{{query|safe}}' You searched for is not Part of FAANG</h3>
31: {% else %}
32:
33: {% endif %}
Attackers can inject malicious JavaScript code that executes in victims' browsers, leading to session hijacking, credential theft, defacement, or redirection to malicious sites. The vulnerability is reflected XSS where user input is directly rendered without proper escaping.
q → query
<input id="search" type="text" name="q" placeholder="Facebook"> → q
<h3> The company '{{query|safe}}' You searched for is not Part of FAANG</h3> → query
<h3> The company '{{query|safe}}' You searched for is not Part of FAANG</h3> → query
Remove the 'safe' filter from the template variable to enable Django's automatic HTML escaping. If HTML content is intentionally needed, use specific safe functions for limited HTML like bleach or mark_safe only after thorough validation.
<h3> The company '{{query}}' You searched for is not Part of FAANG</h3>
94d65c753364Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
introduction/templates/Lab/XSS/xss_lab_2.html
Django template rendering
23: </button>
24: </form>
25: <br>
26: <p>Hello, {{ username|safe }}</p>
27: <script>
>>> 28: function setCookie(name, value) {
29: document.cookie = name + "=" + value + ";path=/;";
30: }
31:
32: function getCookie(name) {
33: var name = name + "=";
Stored XSS vulnerability where attacker-controlled input is rendered without sanitization. Attackers can inject malicious scripts that execute when users view the page, potentially stealing cookies (including the 'flag' cookie), performing actions as the user, or defacing the page.
username
<input type="text" class="form-control" id="username" name="username" required> → username
<p>Hello, {{ username|safe }}</p> → username
<p>Hello, {{ username|safe }}</p> → username
Remove the 'safe' filter and allow Django's automatic escaping. If HTML input is required, implement strict input validation and use a sanitization library like bleach to allow only safe HTML elements and attributes.
<p>Hello, {{ username }}</p>
e57b4e3ab6c4Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
introduction/templates/Lab/XSS/xss_lab_3.html
Django template rendering in script tag
23: </button>
24: </form>
25: <br>
26: <p>{{code}}</p>
27: <script>
>>> 28: // LAB 3 JS CODE
29: {{code}}
30: </script>
31: <br>
32: <div align="right">
33: <button class="btn btn-info" type="button" onclick="window.location.href='/xss'">Back to Lab Details</button>
Direct JavaScript code injection into a script tag context. Attackers can execute arbitrary JavaScript in victims' browsers, leading to complete client-side compromise including cookie theft, session hijacking, keylogging, and malicious redirects. The vulnerability is particularly dangerous as it injects directly into JavaScript execution context.
username → code
<input type="text" class="form-control" id="username" name="username" required> → username
{{code}} → code
{{code}} → code
Never directly inject user input into JavaScript contexts. Use JSON serialization with proper escaping, or pass data via HTML data attributes and retrieve with JavaScript. Implement strict input validation and output encoding for JavaScript contexts.
<script>
// Pass user input only inside a quoted string, escaped for the JS context
var userData = "{{ code|escapejs }}";
</script>
<!-- Or keep the value out of the script context entirely: Django's
     json_script filter emits it as inert JSON that JS reads back safely -->
{{ code|json_script:"user-code" }}
<script>
var userData = JSON.parse(document.getElementById('user-code').textContent);
</script>
514091775c57Server-Side Request Forgery (SSRF)
introduction/templates/Lab/ssrf/ssrf_lab.html
HTML form input with file path
8: <div style="display:flex;flex-direction:row;align-items:center;margin:15px">
9: <form method="post" action="/ssrf_lab">
10: {% csrf_token %}
11: <input type="hidden" name="blog" value="templates/Lab/ssrf/blogs/blog1.txt">
12: <button type="submit" class="btn btn-info"> Blog1 </button>
>>> 13: </form>
14: <form method="post" action="/ssrf_lab">
15: {% csrf_token %}
16: <input type="hidden" name="blog" value="templates/Lab/ssrf/blogs/blog2.txt">
17: <button type="submit" class="btn btn-info"> Blog2 </button>
18: </form>
Although filed under SSRF, the underlying flaw here is a path traversal / local file read: the user-controlled 'blog' parameter is used directly as a file path, so attackers could read sensitive files like .env, configuration files, or system files by manipulating its value.
blog
<input type="hidden" name="blog" value="templates/Lab/ssrf/blogs/blog1.txt"> → blog
<input type="hidden" name="blog" value="templates/Lab/ssrf/blogs/blog1.txt"> → blog
Implement strict allow-list validation for file paths. Never use user input directly in file operations. Use a mapping of allowed identifiers to predefined file paths, and validate against this mapping.
# In views.py — the form's hidden inputs must now send these identifiers
# (e.g. 'blog1') instead of raw file paths
ALLOWED_BLOGS = {
    'blog1': 'templates/Lab/ssrf/blogs/blog1.txt',
    'blog2': 'templates/Lab/ssrf/blogs/blog2.txt',
    'blog3': 'templates/Lab/ssrf/blogs/blog3.txt',
    'blog4': 'templates/Lab/ssrf/blogs/blog4.txt',
}

def ssrf_lab(request):
    if request.method == "POST":
        blog_id = request.POST.get('blog')
        if blog_id not in ALLOWED_BLOGS:
            return render(request, "Lab/ssrf/ssrf_lab.html", {"blog": "Invalid blog"})
        file_path = ALLOWED_BLOGS[blog_id]
        # Safe file reading with path validation
        ...
fba21d5a6668Server-Side Request Forgery (SSRF)
introduction/templates/Lab/ssrf/ssrf_lab2.html
ssrf_lab2
13: elif request.method == "POST":
14: url = request.POST["url"]
15: try:
16: response = requests.get(url)
17: return render(request, "Lab/ssrf/ssrf_lab2.html", {"response": response.content.decode()})
>>> 18: except:
19: return render(request, "Lab/ssrf/ssrf_lab2.html", {"error": "Invalid URL"})
Attackers can make the server send HTTP requests to internal services, cloud metadata endpoints (169.254.169.254), localhost services, or arbitrary external systems. This can lead to information disclosure, internal network reconnaissance, or exploitation of internal services that are not exposed to the internet.
url
<input type="text" class="form-control" id="url" name="url" placeholder="Enter URL"> → url
url = request.POST["url"] → url
response = requests.get(url) → url
1. Implement an allowlist of permitted domains or URL patterns. 2. Validate and sanitize user input using URL parsing libraries. 3. Use network-level restrictions to prevent access to internal IP ranges. 4. Implement proper error handling that doesn't leak internal information.
import re
from urllib.parse import urlparse
def is_allowed_url(url):
    parsed = urlparse(url)
    host = parsed.hostname or ''  # hostname strips any port, unlike netloc
    allowed_domains = ['example.com', 'trusted-site.org']
    # Check if domain is in allowlist
    if host not in allowed_domains:
        return False
    # Block internal IP addresses (defense in depth alongside the allowlist)
    internal_ips = re.compile(r'^(127\.|192\.168\.|10\.|172\.(1[6-9]|2[0-9]|3[0-1])\.|169\.254\.)')
    if internal_ips.match(host):
        return False
    # Only allow HTTP/HTTPS
    if parsed.scheme not in ['http', 'https']:
        return False
    return True

# Usage:
if is_allowed_url(user_url):
    response = requests.get(user_url, timeout=5)
else:
    return render(request, "error.html", {"error": "URL not allowed"})
f0075cc0b9a3Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
introduction/templates/Lab/ssrf/ssrf_lab2.html
render
12: {% if response %}
13: <div style="width:70%;overflow:scroll;background-color:#000">{{response | safe}}<div>
>>> 14: {% endif %}
15: </div>
16:
Attackers can inject malicious JavaScript code that executes in victims' browsers when they view the SSRF response. This can lead to session hijacking, credential theft, defacement, or malware distribution. Combined with SSRF, attackers can fetch malicious content from controlled servers and have it rendered as HTML.
response
<input type="text" class="form-control" id="url" name="url" placeholder="Enter URL"> → url
return render(request, "Lab/ssrf/ssrf_lab2.html", {"response": response.content.decode()}) → response
{{response | safe}} → response
1. Remove the 'safe' filter and let Django auto-escape HTML by default. 2. If HTML content is expected, use a sanitization library like bleach. 3. Validate and sanitize the response content before rendering.
{% if response %}
<div style="width:70%;overflow:scroll;background-color:#000">
{{ response }}
</div>
{% endif %}
# In Python code:
import bleach
# Sanitize HTML before rendering
clean_response = bleach.clean(response.content.decode(), tags=['b', 'i', 'p', 'br'], strip=True)
return render(request, "template.html", {"response": clean_response})
6c0efc78c0f8introduction/apis.py
128: return JsonResponse({"message":"method not allowed"},status = 405)
129: try:
130: code = request.POST.get('code')
131: dirname = os.path.dirname(__file__)
132: filename = os.path.join(dirname, "playground/A6/utility.py")
>>> 133: f = open(filename,"w")
134: f.write(code)
135: f.close()
136: except:
137: return JsonResponse({"message":"missing code"},status = 400)
138: return JsonResponse({"message":"success"},status = 200)
The code writes user-controlled data (`request.POST.get('code')`) directly to a fixed file path. Because the path is built with `os.path.join` against a base directory, this is not path traversal; the vulnerability is that an attacker can write arbitrary Python into a server-side module (`playground/A6/utility.py`) — an arbitrary code write (code injection) and a dangerous architectural flaw. Layer 1 safety is absent.
open(filename
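No remediation accompanies this finding. If the lab genuinely must accept user-authored Python, one hedged mitigation — a sketch, not a complete sandbox, since AST whitelisting is easy to get wrong and should never be the only control — is to parse the submission and reject anything outside a small whitelist of node types before writing it to disk. The names `ALLOWED_NODES` and `is_safe_submission` are illustrative:

```python
import ast

# Hypothetical pre-write gate: allow only simple arithmetic-style functions,
# rejecting imports, attribute access, and any function calls.
ALLOWED_NODES = (
    ast.Module, ast.FunctionDef, ast.arguments, ast.arg, ast.Return,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Div,
    ast.Name, ast.Load, ast.Constant, ast.Expr,
)

def is_safe_submission(source: str) -> bool:
    """Reject any submission containing AST nodes outside the whitelist."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return all(isinstance(node, ALLOWED_NODES) for node in ast.walk(tree))
```

The view would call `is_safe_submission(code)` and return a 400 before the `open(filename, "w")` when it fails.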
3bf2b39b1aacintroduction/playground/ssrf/test.py
7: else:
8: file=request.POST["blog"]
9: try :
10: dirname = os.path.dirname(__file__)
11: filename = os.path.join(dirname, file)
>>> 12: file = open(filename,"r")
13: data = file.read()
14: return render(request,"Lab/ssrf/ssrf_lab.html",{"blog":data})
15: except:
16: return render(request, "Lab/ssrf/ssrf_lab.html", {"blog": "No blog found"})
17: else:
Layer 1: No structural safety (user-controlled `file` variable flows into `open()`). Layer 2: Architectural flaw - Path Traversal vulnerability confirmed. User input from `request.POST["blog"]` is used to construct a file path without validation, leading to arbitrary file read.
open(filename
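The standard fix for this pattern can be sketched as follows. `read_blog` is a hypothetical helper, not code from the repository: it resolves the joined path with `os.path.realpath` (which collapses `..` segments) and refuses anything that escapes the base directory:

```python
import os

def read_blog(requested: str, base_dir: str) -> str:
    """Resolve the requested path and refuse anything escaping base_dir."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, requested))
    # A traversal attempt like '../../etc/passwd' resolves outside base
    # and fails the commonpath containment check below.
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError("path escapes blog directory")
    with open(candidate, "r") as fh:
        return fh.read()
```

Combined with the allowlist-of-identifiers approach shown earlier, this gives defense in depth: the allowlist prevents attacker-chosen names, and the containment check catches any path that slips through.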
826e3a954f29introduction/playground/ssrf/main.py
3:
4: def ssrf_lab(file):
5: try:
6: dirname = os.path.dirname(__file__)
7: filename = os.path.join(dirname, file)
>>> 8: file = open(filename,"r")
9: data = file.read()
10: return {"blog":data}
11: except:
12: return {"blog": "No blog found"}
Layer 2 TP Trigger: The code shows a Path Traversal vulnerability. The `filename` variable is constructed by joining a base directory with a user-controlled `file` parameter. While `os.path.join` provides some safety on Unix-like systems, the overall pattern is a classic architectural flaw for path traversal if the `file` parameter contains directory traversal sequences (e.g., '../../etc/passwd'). The context does not show any validation or sanitization of the user input before it is passed to `open()`.
open(filename
4ff64b3ea9f6introduction/static/js/a9.js
35: let data = JSON.parse(result); // parse JSON string into object
36: console.log(data.logs);
37: document.getElementById("a9_d3").style.display = 'flex';
38: for (var i = 0; i < data.logs.length; i++) {
39: var li = document.createElement("li");
>>> 40: li.innerHTML = data.logs[i];
41: document.getElementById("a9_d3").appendChild(li);
42: }
43: })
44: .catch(error => console.log('error', error));
45: }
Layer 2 TP Trigger: Client-Side Security & Attack Chains. The finding shows direct assignment of untrusted data (`data.logs[i]`) to `innerHTML`. This is a classic DOM-based XSS vulnerability. The data originates from a parsed JSON response (`result`), which is attacker-controlled if the source is not strictly trusted. No sanitization or safe API (e.g., `textContent`) is used, creating a direct client-side attack chain.
.innerHTML =
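A hedged sketch of the standard fix (the helper name `renderLogs` is illustrative): build each list item with `textContent`, which treats the payload as plain text and never parses markup, instead of `innerHTML`:

```javascript
// Render untrusted log entries safely: textContent displays injected
// markup literally instead of executing it, closing the DOM-XSS sink.
function renderLogs(logs, listElement) {
  logs.forEach(function (entry) {
    var li = document.createElement("li");
    li.textContent = entry; // replaces the vulnerable innerHTML assignment
    listElement.appendChild(li);
  });
}
```

In the original code, the loop body would become `renderLogs(data.logs, document.getElementById("a9_d3"))`; if log lines legitimately contain HTML, they should instead be sanitized server-side before being returned.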
3eee0d687b61dockerized_labs/insec_des_lab/templates/base.html
15: requestAnimationFrame(() => {
16: html.setAttribute('data-theme', newTheme);
17: localStorage.setItem('theme', newTheme);
18:
19: const themeToggle = document.querySelector('.theme-toggle');
>>> 20: themeToggle.innerHTML = newTheme === 'dark' ? '☀️' : '🌙';
21: });
22: }
23:
24: // Set theme on page load
25: document.addEventListener('DOMContentLoaded', () => {
Layer 2 TP Trigger: Client-Side Storage Risk. The finding's context shows `localStorage.setItem('theme', newTheme);` on line 17, storing user-controlled data (`newTheme`) in `localStorage`. While the immediate sink (`innerHTML` on line 20) uses a ternary operator with hardcoded emojis, the broader code pattern demonstrates a client-side storage mechanism for dynamic data. The threat model for XSS includes attackers exfiltrating or manipulating data stored in `localStorage`. Therefore, any finding that writes dynamic data to `localStorage` merits review, even when the immediate sink is hardcoded.
.innerHTML =
546da858ec07Container runs as root user, which is a security risk
Dockerfile
>>> # Dockerfile missing USER directive:
1: FROM python:3.11.0b1-buster
5: WORKDIR /app
9: RUN apt-get update && apt-get install --no-install-recommends -y dnsutils=1:9.11.5.P4+dfsg-5.1+deb10u11 libpq-dev=11.16-0+deb10u1 python3-dev=3.7.3-1 && apt-get clean && rm -rf /var/lib/apt/lists/*
13: ENV PYTHONDONTWRITEBYTECODE=1
14: ENV PYTHONUNBUFFERED=1
Add 'USER nonroot' or 'USER 1000' to run as non-root user
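The one-line remediation can be shown in context. A hedged sketch — the base image tag, `appuser` name, and the `runserver` command are illustrative, not taken from the repository's actual Dockerfile:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Create an unprivileged user with a fixed UID, then drop root;
# everything after USER (including CMD) runs without root privileges.
RUN useradd --create-home --uid 1000 appuser
COPY . .
USER appuser
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

Placing `USER` after the package installation steps keeps `apt-get`/`pip` working while ensuring the running container process itself is unprivileged.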
d3c2b15a8826Using externally controlled strings in a command line may allow a malicious user to change the meaning of the command.
introduction/mitre.py
236:
237: @csrf_exempt
238: def mitre_lab_17_api(request):
239: if request.method == "POST":
240: ip = request.POST.get('ip')
>>> 241: command = "nmap " + ip
242: res, err = command_out(command)
243: res = res.decode()
244: err = err.decode()
245: pattern = "STATE SERVICE.*\\n\\n"
246: ports = re.findall(pattern, res,re.DOTALL)[0][14:-2].split('\n')
An attacker could execute arbitrary shell commands on the server by injecting command separators (like ;, &&, |) or nmap options, potentially leading to complete system compromise, data theft, or lateral movement within the network.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for ip → ControlFlowNode for ip
def mitre_lab_17_api(request): → ControlFlowNode for request
ip = request.POST.get('ip') → ControlFlowNode for Attribute
ip = request.POST.get('ip') → ControlFlowNode for Attribute()
ip = request.POST.get('ip') → ControlFlowNode for ip
command = "nmap " + ip → ControlFlowNode for ip
First, validate the input to ensure it contains only valid IP addresses or hostnames using a whitelist approach. Second, use subprocess with argument list instead of string concatenation to prevent command injection. Third, implement proper error handling for invalid inputs. Finally, consider using a Python nmap library instead of shell commands.
import subprocess
import re
from django.core.validators import validate_ipv4_address
from django.core.exceptions import ValidationError

@csrf_exempt
def mitre_lab_17_api(request):
    if request.method == "POST":
        ip = request.POST.get('ip')
        try:
            # Validate IP address format
            validate_ipv4_address(ip)
            # Use subprocess with argument list
            result = subprocess.run(['nmap', ip],
                                    capture_output=True,
                                    text=True,
                                    timeout=30)
            res = result.stdout
            err = result.stderr
            pattern = "STATE SERVICE.*\\n\\n"
            ports = re.findall(pattern, res, re.DOTALL)[0][14:-2].split('\n')
            # ... rest of your code
        except ValidationError:
            return HttpResponse("Invalid IP address format", status=400)
        except subprocess.TimeoutExpired:
            return HttpResponse("Scan timed out", status=408)
d6619a8fd5afInsecure cookies may be sent in cleartext, which makes them vulnerable to interception.
introduction/views.py
280: passwd = request.POST['pass']
281: obj = authLogin.objects.create(name=name,username=user_name,password=passwd)
282: try:
283: rendered = render_to_string('Lab/AUTH/auth_success.html', {'username': obj.username,'userid':obj.userid,'name':obj.name,'err_msg':'Cookie Set'})
284: response = HttpResponse(rendered)
>>> 285: response.set_cookie('userid', obj.userid, max_age=31449600, samesite=None, secure=False)
286: print('Setting cookie successful')
287: return response
288: except:
289: render(request,'Lab/AUTH/auth_lab_signup.html',{'err_msg':'Cookie cannot be set'})
290: except:
Attackers could intercept the userid cookie via man-in-the-middle attacks on unencrypted HTTP connections, leading to session hijacking and account takeover. The current configuration also exposes the application to CSRF attacks due to the insecure SameSite setting.
response.set_cookie('userid', obj.userid, max_age=31449600, samesite=None, secure=False)
Set the 'secure' flag to True to ensure cookies are only transmitted over HTTPS connections. Also set 'samesite' to 'Strict' or 'Lax' to prevent CSRF attacks; samesite=None cannot stay as-is, because modern browsers only honor SameSite=None when the Secure attribute is also set. Ensure your application is served exclusively over HTTPS.
response.set_cookie('userid', obj.userid, max_age=31449600, samesite='Strict', secure=True, httponly=True)
4de09e36456eCookies without the `HttpOnly` attribute set can be accessed by JS scripts, making them more vulnerable to XSS attacks.
introduction/views.py
280: passwd = request.POST['pass']
281: obj = authLogin.objects.create(name=name,username=user_name,password=passwd)
282: try:
283: rendered = render_to_string('Lab/AUTH/auth_success.html', {'username': obj.username,'userid':obj.userid,'name':obj.name,'err_msg':'Cookie Set'})
284: response = HttpResponse(rendered)
>>> 285: response.set_cookie('userid', obj.userid, max_age=31449600, samesite=None, secure=False)
286: print('Setting cookie successful')
287: return response
288: except:
289: render(request,'Lab/AUTH/auth_lab_signup.html',{'err_msg':'Cookie cannot be set'})
290: except:
An attacker could steal the user's session cookie via cross-site scripting (XSS) attacks, allowing them to impersonate the user and gain unauthorized access to their account without needing credentials.
response.set_cookie('userid', obj.userid, max_age=31449600, samesite=None, secure=False)
Add the `httponly=True` parameter to the `set_cookie()` call to prevent client-side JavaScript from accessing the cookie. Additionally, set `secure=True` if the application uses HTTPS (which it should in production) to prevent transmission over unencrypted connections. Consider also setting `samesite='Lax'` or `samesite='Strict'` for CSRF protection.
response.set_cookie('userid', obj.userid, max_age=31449600, httponly=True, secure=True, samesite='Lax')
006af1fe9a47Using externally controlled strings in a command line may allow a malicious user to change the meaning of the command.
introduction/views.py
412: domain=request.POST.get('domain')
413: domain=domain.replace("https://www.",'')
414: os=request.POST.get('os')
415: print(os)
416: if(os=='win'):
>>> 417: command="nslookup {}".format(domain)
418: else:
419: command = "dig {}".format(domain)
420:
421: try:
422: # output=subprocess.check_output(command,shell=True,encoding="UTF-8")
An attacker could execute arbitrary shell commands by injecting command separators (like ;, &&, |) or subcommands via the domain parameter, potentially leading to remote code execution, data exfiltration, or system compromise.
ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for domain → ControlFlowNode for domain → ControlFlowNode for domain
def cmd_lab(request): → ControlFlowNode for request
domain=request.POST.get('domain') → ControlFlowNode for Attribute
domain=request.POST.get('domain') → ControlFlowNode for Attribute()
domain=request.POST.get('domain') → ControlFlowNode for domain
domain=domain.replace("https://www.",'') → ControlFlowNode for domain
command="nslookup {}".format(domain) → ControlFlowNode for domain
First, validate the domain input using a whitelist of allowed characters (alphanumeric, hyphens, dots) or a proper domain regex pattern. Second, avoid using shell=True and instead pass the command as a list of arguments to subprocess.check_output. Third, use shlex.quote() to escape the domain parameter when constructing shell commands if shell=True cannot be avoided.
import re
import shlex  # shlex.quote() is only needed if shell=True cannot be avoided

# Validate domain input
domain = request.POST.get('domain', '').strip()
domain = domain.replace("https://www.", '')
# Validate domain format
if not re.match(r'^[a-zA-Z0-9][a-zA-Z0-9.-]*[a-zA-Z0-9]$', domain):
    return HttpResponse("Invalid domain format", status=400)
if os == 'win':
    # Safe approach without shell=True
    output = subprocess.check_output(['nslookup', domain], encoding="UTF-8")
else:
    # Safe approach without shell=True
    output = subprocess.check_output(['dig', domain], encoding="UTF-8")
6514067bc2baWhen checking a Hash over a message, a constant-time algorithm should be used. Otherwise, an attacker may be able to forge a valid Hash for an arbitrary message by running a timing attack if they can
introduction/views.py
1188: except:
1189: response = render(request, "Lab_2021/A7_auth_failure/lab3.html")
1190: response.set_cookie("session_id", None)
1191: return response
1192:
>>> 1193: if USER_A7_LAB3[username]['password'] == password:
1194: session_data = AF_session_id.objects.create(session_id=token, user=USER_A7_LAB3[username]['username'])
1195: session_data.save()
1196: response = render(request, "Lab_2021/A7_auth_failure/lab3.html", {"success":True, "failure":False, "username":username})
1197: response.set_cookie("session_id", token)
1198: return response
An attacker could perform a timing attack to gradually deduce the stored password hash by measuring response time differences. This could eventually allow them to authenticate as another user without knowing the exact password.
ControlFlowNode for Attribute() → ControlFlowNode for password → ControlFlowNode for password
password = hashlib.sha256(password.encode()).hexdigest() → ControlFlowNode for Attribute()
password = hashlib.sha256(password.encode()).hexdigest() → ControlFlowNode for password
if USER_A7_LAB3[username]['password'] == password: → ControlFlowNode for password
Replace the direct string comparison with a constant-time comparison function. First, import a secure comparison function like `hmac.compare_digest()` from Python's `hmac` module. Then modify line 1193 to use this function instead of the `==` operator, ensuring the comparison time doesn't depend on the input values. This prevents attackers from inferring password similarities through timing differences.
import hmac

# ... existing code ...
if hmac.compare_digest(USER_A7_LAB3[username]['password'], password):
a30b9864129cUse of a non-constant-time verification routine to check the value of an secret, possibly allowing a timing attack to retrieve sensitive information.
introduction/views.py
754: 'Lab_2021/A1_BrokenAccessControl/broken_access_lab_1.html',
755: {
756: "data":"0NLY_F0R_4DM1N5",
757: "username": "admin"
758: })
>>> 759: elif (name=='jack' and password=='jacktheripper'): # Will implement hashing here
760: html = render(
761: request,
762: 'Lab_2021/A1_BrokenAccessControl/broken_access_lab_1.html',
763: {
764: "not_admin":"No Secret key for this User",
An attacker could perform a timing attack to gradually guess the password by measuring response time differences between correct and incorrect character matches, potentially compromising the 'jack' account without brute-forcing the entire password space.
ControlFlowNode for password → ControlFlowNode for password
password = request.POST.get('pass') → ControlFlowNode for password
elif (name=='jack' and password=='jacktheripper'): # Will implement hashing here → ControlFlowNode for password
Replace the direct string comparison with a constant-time comparison function that compares all characters regardless of match position. Use Django's built-in `constant_time_compare()` from `django.utils.crypto` or Python's `hmac.compare_digest()`. First, implement password hashing using Django's authentication system with `make_password()` and `check_password()` functions. Then use constant-time comparison for any remaining direct string comparisons.
from django.contrib.auth.hashers import make_password, check_password
from django.utils.crypto import constant_time_compare
# Store hashed password during user creation
hashed_password = make_password('jacktheripper')
# In the view comparison:
elif name == 'jack' and check_password(password, hashed_password):
# Or for direct string comparison if needed:
# elif name == 'jack' and constant_time_compare(password, 'jacktheripper'):
b8a2a1e862aaUse of a non-constant-time verification routine to check the value of an secret, possibly allowing a timing attack to retrieve sensitive information.
introduction/views.py
794: {
795: "data":"0NLY_F0R_4DM1N5",
796: "username": "admin",
797: "status": "admin"
798: })
>>> 799: elif ( name=='jack' and password=='jacktheripper'): # Will implement hashing here
800: html = render(
801: request,
802: 'Lab_2021/A1_BrokenAccessControl/broken_access_lab_2.html',
803: {
804: "not_admin":"No Secret key for this User",
An attacker could perform a timing attack to gradually guess the secret password by measuring response time differences between correct and incorrect character matches, potentially gaining unauthorized admin access or user impersonation.
ControlFlowNode for password → ControlFlowNode for password
password = request.POST.get('pass') → ControlFlowNode for password
elif ( name=='jack' and password=='jacktheripper'): # Will implement hashing here → ControlFlowNode for password
Replace the direct string comparison with a constant-time comparison function that compares all characters regardless of match position. Use Python's `secrets.compare_digest()` or `hmac.compare_digest()` (both standard library) for timing-safe comparison. Also implement proper password hashing instead of storing plaintext credentials in the code.
import hmac

# Replace lines 799-800 with:
elif name == 'jack' and hmac.compare_digest(password.encode('utf-8'), b'jacktheripper'):
    # Will implement hashing here (also hash the password and compare hashes)
    html = render(
        request,
        'Lab_2021/A1_BrokenAccessControl/broken_access_lab_2.html',
        {
            "not_admin": "No Secret key for this User",
# Apply the same constant-time comparison to the admin branch
# (the admin password itself is not shown in this snippet).
e54aa66219f8Use of a non-constant-time verification routine to check the value of an secret, possibly allowing a timing attack to retrieve sensitive information.
introduction/views.py
821: username = request.POST["username"]
822: password = request.POST["password"]
823:
824: if username == 'John' and password == 'reaper':
825: return render(request,'Lab_2021/A1_BrokenAccessControl/broken_access_lab_3.html', {'loggedin':True, 'admin': False})
>>> 826: elif username == 'admin' and password == 'admin_pass':
827: return render(request,'Lab_2021/A1_BrokenAccessControl/broken_access_lab_3.html', {'loggedin':True, 'admin': True})
828: return render(request, 'Lab_2021/A1_BrokenAccessControl/broken_access_lab_3.html', {'loggedin':False})
829:
830: def a1_broken_access_lab3_secret(request):
831: if not request.user.is_authenticated:
An attacker could perform a timing attack to determine the correct password by measuring response time differences between correct and incorrect characters, potentially gaining unauthorized admin access or learning valid credentials through statistical analysis.
ControlFlowNode for password → ControlFlowNode for password
password = request.POST["password"] → ControlFlowNode for password
elif username == 'admin' and password == 'admin_pass': → ControlFlowNode for password
Replace the direct string comparison with a constant-time comparison function that doesn't leak timing information. Use Python's `secrets.compare_digest()` or `hmac.compare_digest()` (both standard library) for comparing the password. Also, move the password comparison to a separate function that compares both username and password in constant time, or restructure the logic to avoid early returns that reveal which comparison failed.
import hmac

# In the view function:
username = request.POST["username"]
password = request.POST["password"]

# Constant-time comparison for both username and password
if hmac.compare_digest(username, 'John') and hmac.compare_digest(password, 'reaper'):
    return render(request, 'Lab_2021/A1_BrokenAccessControl/broken_access_lab_3.html', {'loggedin': True, 'admin': False})
elif hmac.compare_digest(username, 'admin') and hmac.compare_digest(password, 'admin_pass'):
    return render(request, 'Lab_2021/A1_BrokenAccessControl/broken_access_lab_3.html', {'loggedin': True, 'admin': True})
return render(request, 'Lab_2021/A1_BrokenAccessControl/broken_access_lab_3.html', {'loggedin': False})
782038aae5edUse of a non-constant-time verification routine to check the value of a secret, possibly allowing a timing attack to retrieve sensitive information.
introduction/views.py
1060: return render(request,"Lab_2021/A2_Crypto_failur/crypto_failure_lab3.html")
1061: if request.method == "POST":
1062: username = request.POST["username"]
1063: password = request.POST["password"]
1064: try:
>>> 1065: if username == "User" and password == "P@$$w0rd":
1066: expire = datetime.datetime.now() + datetime.timedelta(minutes=60)
1067: cookie = f"{username}|{expire}"
1068: response = render(request,"Lab_2021/A2_Crypto_failur/crypto_failure_lab3.html",{"success":True, "failure":False , "admin":False})
1069: response.set_cookie("cookie", cookie)
1070: response.status_code = 200
An attacker could perform a timing attack to gradually guess the correct username and password by measuring response time differences, potentially gaining unauthorized access as the 'User' account.
ControlFlowNode for password → ControlFlowNode for password
password = request.POST["password"] → ControlFlowNode for password
if username == "User" and password == "P@$$w0rd": → ControlFlowNode for password
Replace the direct string comparison with a constant-time comparison function. First, import Django's `constant_time_compare` from `django.utils.crypto`. Then use it to compare both the username and password separately. This ensures the comparison time doesn't reveal information about how many characters matched or which comparison failed.
from django.utils.crypto import constant_time_compare

# In the view function:
if constant_time_compare(username, "User") and constant_time_compare(password, "P@$$w0rd"):
8a529283437eA remote endpoint identifier is read from an HTTP header. Attackers can modify the value of the identifier to forge the client ip.
introduction/views.py
655: ip = x_forwarded_for.split(',')[0]
656: else:
657: ip = request.META.get('REMOTE_ADDR')
658:
659: if login.objects.filter(user=user,password=password):
>>> 660: if ip != '127.0.0.1':
661: logging.warning(f"{now}:{ip}:{user}")
662: logging.info(f"{now}:{ip}:{user}")
663: return render(request,"Lab/A10/a10_lab2.html",{"name":user})
664: else:
665: logging.error(f"{now}:{ip}:{user}")
An attacker could spoof their IP address to appear as localhost (127.0.0.1), potentially bypassing security logging or gaining unauthorized access if the IP check is used for authentication decisions elsewhere in the application.
ControlFlowNode for Attribute() → ControlFlowNode for x_forwarded_for → ControlFlowNode for ip → ControlFlowNode for ip
x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR') → ControlFlowNode for Attribute()
x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR') → ControlFlowNode for x_forwarded_for
ip = x_forwarded_for.split(',')[0] → ControlFlowNode for ip
if ip != '127.0.0.1': → ControlFlowNode for ip
First, remove the IP address check entirely since it's unreliable for security decisions; implement proper authentication and authorization controls instead. Second, if you need to log the client IP, use a vetted helper such as `get_client_ip()` from the `django-ipware` package, or a validated approach that checks trusted proxies. Never trust the X-Forwarded-For header without validating it against a list of trusted proxies.
def get_client_ip(request):
    x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
    if x_forwarded_for:
        # Get the first IP in the list
        ip = x_forwarded_for.split(',')[0]
    else:
        ip = request.META.get('REMOTE_ADDR')
    return ip

# In the login check code:
ip = get_client_ip(request)
# Remove the IP-based security check:
# if ip != '127.0.0.1':  # REMOVE THIS LINE
# Instead, implement proper authentication:
if authenticate(username=user, password=password):
    logging.info(f"{now}:{ip}:{user}")
    return render(request, "Lab/A10/a10_lab2.html", {"name": user})
4efa375e64f4A remote endpoint identifier is read from an HTTP header. Attackers can modify the value of the identifier to forge the client ip.
introduction/views.py
938: if x_forwarded_for:
939: ip = x_forwarded_for.split(',')[0]
940: else:
941: ip = request.META.get('REMOTE_ADDR')
942:
>>> 943: if ip == '127.0.0.1':
944: return render(request,"Lab/ssrf/ssrf_target.html")
945: else:
946: return render(request,"Lab/ssrf/ssrf_target.html",{"access_denied":True})
947:
948: @authentication_decorator
An attacker could spoof the X-Forwarded-For header to make their request appear to come from localhost (127.0.0.1), bypassing IP-based access controls and gaining unauthorized access to resources intended only for localhost.
ControlFlowNode for Attribute() → ControlFlowNode for x_forwarded_for → ControlFlowNode for ip → ControlFlowNode for ip
x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR') → ControlFlowNode for Attribute()
x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR') → ControlFlowNode for x_forwarded_for
ip = x_forwarded_for.split(',')[0] → ControlFlowNode for ip
if ip == '127.0.0.1': → ControlFlowNode for ip
First, never trust the X-Forwarded-For header as it can be easily spoofed by clients. Instead, always use REMOTE_ADDR for the actual client IP address. If you need to check if a request came from localhost, compare REMOTE_ADDR directly to '127.0.0.1' without any header parsing. Remove the X-Forwarded-For logic entirely since it's unreliable for security decisions.
# Use REMOTE_ADDR for security decisions - it comes from the TCP connection
# rather than a client-supplied header, so it cannot be forged by simply
# setting X-Forwarded-For
ip = request.META.get('REMOTE_ADDR')
if ip == '127.0.0.1':
    return render(request, "Lab/ssrf/ssrf_target.html")
else:
    return render(request, "Lab/ssrf/ssrf_target.html", {"access_denied": True})
e3399ce46da1Constructing cookies from user input may allow an attacker to perform a Cookie Poisoning attack.
dockerized_labs/broken_auth_lab/app.py
44: # Vulnerable: Insecure session management
45: session_token = base64.b64encode(f"{username}:{datetime.now()}".encode()).decode()
46:
47: if remember_me:
48: # Vulnerable: Insecure "Remember Me" implementation
>>> 49: response.set_cookie('session', session_token, max_age=30*24*60*60)
50: else:
51: response.set_cookie('session', session_token)
52:
53: return response
54:
An attacker could forge session tokens by predicting the pattern (username + timestamp) or manipulate cookies to impersonate users, leading to account takeover and unauthorized access to sensitive data.
ControlFlowNode for ImportMember → ControlFlowNode for request → ControlFlowNode for request → ControlFlowNode for Attribute → ControlFlowNode for Attribute() → ControlFlowNode for username → ControlFlowNode for session_token → ControlFlowNode for session_token
from flask import Flask, render_template, request, redirect, url_for, make_response, flash → ControlFlowNode for ImportMember
from flask import Flask, render_template, request, redirect, url_for, make_response, flash → ControlFlowNode for request
username = request.form.get('username') → ControlFlowNode for request
username = request.form.get('username') → ControlFlowNode for Attribute
username = request.form.get('username') → ControlFlowNode for Attribute()
username = request.form.get('username') → ControlFlowNode for username
session_token = base64.b64encode(f"{username}:{datetime.now()}".encode()).decode() → ControlFlowNode for session_token
response.set_cookie('session', session_token, max_age=30*24*60*60) → ControlFlowNode for session_token
Replace the insecure session token generation with a cryptographically secure random token. Store the token server-side in a secure session store (like Flask-Session with server-side storage) and associate it with user data. Set secure cookie attributes: HttpOnly, Secure, SameSite=Strict, and use a proper session expiration mechanism instead of hardcoded max_age.
import secrets
from flask import session

# Replace lines 44-53 with:
session_token = secrets.token_urlsafe(32)
session['user_id'] = get_user_id(username)  # get_user_id: hypothetical database lookup
session['authenticated'] = True
response = redirect('/dashboard')
if remember_me:
    response.set_cookie('session_id', session_token,
                        max_age=30*24*60*60,
                        httponly=True,
                        secure=True,  # Use True in production
                        samesite='Strict')
else:
    response.set_cookie('session_id', session_token,
                        httponly=True,
                        secure=True,
                        samesite='Strict')
# Store session_token in the database, associated with user_id
79a933299608Use of a non-constant-time verification routine to check the value of a secret, possibly allowing a timing attack to retrieve sensitive information.
dockerized_labs/broken_auth_lab/app.py
36: def login():
37: username = request.form.get('username')
38: password = request.form.get('password')
39: remember_me = request.form.get('remember_me')
40:
>>> 41: if username in users and users[username]['password'] == password: # Vulnerable: Plain text password comparison
42: response = make_response(redirect(url_for('dashboard')))
43:
44: # Vulnerable: Insecure session management
45: session_token = base64.b64encode(f"{username}:{datetime.now()}".encode()).decode()
46:
An attacker could perform a timing attack to deduce valid usernames and passwords by measuring response time differences, potentially gaining unauthorized access to user accounts.
ControlFlowNode for password → ControlFlowNode for password
password = request.form.get('password') → ControlFlowNode for password
if username in users and users[username]['password'] == password: # Vulnerable: Plain text password comparison → ControlFlowNode for password
Replace the plain text password comparison with a constant-time comparison function. First, import `hmac.compare_digest()` from Python's `hmac` module. Then use it to compare the provided password with the stored value so the comparison time doesn't leak how many characters match. Additionally, store passwords as hashes using a strong algorithm like bcrypt or Argon2, and verify them with a helper such as Werkzeug's `check_password_hash()`.
import hmac
from werkzeug.security import check_password_hash

# In the login function:
if username in users and hmac.compare_digest(
        users[username]['password'].encode('utf-8'),
        password.encode('utf-8')):
    response = make_response(redirect(url_for('dashboard')))

# Or better yet, use hashed passwords:
# if username in users and check_password_hash(users[username]['password_hash'], password):
3f85f09bb6f2Use of a non-constant-time verification routine to check the value of a secret, possibly allowing a timing attack to retrieve sensitive information.
dockerized_labs/broken_auth_lab/app.py
94: flash('Email not found')
95: return redirect(url_for('lab'))
96:
97: @app.route('/reset/<token>')
98: def reset_form(token):
>>> 99: if token in password_reset_tokens:
100: return render_template('reset.html', token=token)
101: return 'Invalid token'
102:
103: @app.route('/dashboard')
104: def dashboard():
An attacker could perform a timing attack to gradually guess valid password reset tokens by measuring response times, potentially hijacking user accounts by bypassing the password reset mechanism.
ControlFlowNode for token → ControlFlowNode for token
def reset_form(token): → ControlFlowNode for token
if token in password_reset_tokens: → ControlFlowNode for token
Replace the direct dictionary membership check with a constant-time comparison. Because the dictionary is keyed by the token itself, compare the submitted token against each stored token using `hmac.compare_digest()` in Python, without exiting early on a match, so the response time doesn't depend on how many characters match between the input and stored tokens.
from flask import Flask, render_template, redirect, url_for, flash
import hmac
# ... existing code ...
@app.route('/reset/<token>')
def reset_form(token):
    # Compare against every stored token in constant time, with no early exit
    valid = False
    for stored_token in password_reset_tokens:
        if hmac.compare_digest(token, stored_token):
            valid = True
    if valid:
        return render_template('reset.html', token=token)
    return 'Invalid token'
703b2897c759Use of a non-constant-time verification routine to check the value of a secret, possibly allowing a timing attack to retrieve sensitive information.
dockerized_labs/broken_auth_lab/app.py
107: return redirect(url_for('lab'))
108:
109: try:
110: # Vulnerable: Insecure session validation
111: username = base64.b64decode(session_token).decode().split(':')[0]
>>> 112: if username in users:
113: return render_template('dashboard.html',
114: username=username,
115: role=users[username]['role'],
116: email=users[username]['email'])
117: except:
An attacker could enumerate valid usernames by measuring response time differences, enabling targeted brute-force attacks. This could lead to account takeover if weak passwords are used.
ControlFlowNode for session_token → ControlFlowNode for username → ControlFlowNode for username
session_token = request.cookies.get('session') → ControlFlowNode for session_token
username = base64.b64decode(session_token).decode().split(':')[0] → ControlFlowNode for username
if username in users: → ControlFlowNode for username
Replace the direct dictionary lookup with a constant-time comparison. First, use a cryptographic hash function (like SHA256) to hash the username from the session token, then compare it against pre-hashed usernames in the users dictionary. This ensures the comparison time doesn't reveal whether a username exists in the system. Additionally, validate the session token structure before processing.
import hashlib

# Pre-computed once at startup, e.g.:
# user_hashes = {hashlib.sha256(u.encode()).hexdigest(): u for u in users}
try:
    # Fixed: hash the untrusted username before the lookup so comparison
    # time does not correlate with the input's characters
    decoded = base64.b64decode(session_token).decode()
    username = decoded.split(':')[0]
    username_hash = hashlib.sha256(username.encode()).hexdigest()
    # Check if hash exists in pre-computed user hashes
    if username_hash in user_hashes:
        actual_username = user_hashes[username_hash]
        return render_template('dashboard.html',
                               username=actual_username,
                               role=users[actual_username]['role'],
                               email=users[actual_username]['email'])
except:
1eae3a0cc179Exposure of Sensitive Information to an Unauthorized Actor
dockerized_labs/broken_auth_lab/app.py
flash
81: # In a real application, this would send an email
82: # Vulnerable: Token exposed in response
83: flash(f'Password reset link: /reset/{token}')
>>> 84: return redirect(url_for('lab'))
85:
86: flash('Email not found')
87: return redirect(url_for('lab'))
Password reset tokens are exposed in flash messages visible to the user (and potentially logged). An attacker with access to the user's session or logs can capture the token and reset the password, leading to account takeover.
token
token = hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest() → token
flash(f'Password reset link: /reset/{token}') → token
Password reset tokens should be sent via a secure out-of-band channel (e.g., email). Never expose sensitive tokens in HTTP responses.
# Send the token via email using a secure email service
# Do not include the token in flash messages or HTTP responses
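As a sketch of the out-of-band delivery, the reset link can be placed in an email message instead of the HTTP response; `build_reset_email` and the base URL are hypothetical, and actual delivery (e.g. via `smtplib`) is left out:

```python
from email.message import EmailMessage

def build_reset_email(to_addr: str, token: str,
                      base_url: str = "https://example.com") -> EmailMessage:
    # The token appears only in the email body, never in a flash
    # message or other HTTP response content.
    msg = EmailMessage()
    msg["Subject"] = "Password reset"
    msg["To"] = to_addr
    msg.set_content(f"Reset your password: {base_url}/reset/{token}")
    return msg
```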
98121ef219a9Active Debug Code
dockerized_labs/broken_auth_lab/app.py
Flask.run
106:
107: if __name__ == '__main__':
>>> 108: app.run(host='0.0.0.0', port=5000, debug=True) # Vulnerable: Debug mode enabled in production
109:
Debug mode exposes detailed error messages, stack traces, and interactive debugger (if enabled) to attackers. This can leak sensitive information about the application's internals, including source code paths, configuration details, and potential attack vectors.
debug
app.run(host='0.0.0.0', port=5000, debug=True) → debug parameter
Disable debug mode in production. Use environment variables to control debug settings.
import os
debug_mode = os.environ.get('FLASK_DEBUG', 'False').lower() == 'true'
app.run(host='0.0.0.0', port=5000, debug=debug_mode)
eafbcf693a92dockerized_labs/broken_auth_lab/app.py
81:
82: # Vulnerable: Password reset token generation
83: for username, user_data in users.items():
84: if user_data['email'] == email:
85: # Vulnerable: Predictable token generation
>>> 86: token = hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest()
87: password_reset_tokens[token] = username
88:
89: # In a real application, this would send an email
90: # Vulnerable: Token exposed in response
91: flash(f'Password reset link: /reset/{token}')
Layer 2 TP Trigger: Architectural flaw - Use of cryptographically broken MD5 hash for security-sensitive token generation.
hashlib.md5(
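Though the finding stops at flagging the MD5 call, an unguessable replacement is straightforward with Python's `secrets` module; the helper name and token length are assumptions:

```python
import secrets

def new_reset_token() -> str:
    # 32 random bytes from the OS CSPRNG, URL-safe encoded. Unlike
    # md5(email + timestamp), this cannot be predicted from knowable
    # inputs such as the email address and request time.
    return secrets.token_urlsafe(32)
```

It would drop in where the MD5 digest is assigned, e.g. `password_reset_tokens[new_reset_token()] = username`.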
fb383788728eUse of a non-constant-time verification routine to check the value of a secret, possibly allowing a timing attack to retrieve sensitive information.
introduction/playground/A9/api.py
12: return JsonResponse({"message":"normal get request", "method":"get"},status = 200)
13: if request.method == "POST":
14: username = request.POST['username']
15: password = request.POST['password']
16: L.info(f"POST request with username {username} and password {password}")
>>> 17: if username == "admin" and password == "admin":
18: return JsonResponse({"message":"Loged in successfully", "method":"post"},status = 200)
19: return JsonResponse({"message":"Invalid credentials", "method":"post"},status = 401)
20: if request.method == "PUT":
21: L.info("PUT request")
22: return JsonResponse({"message":"success", "method":"put"},status = 200)
An attacker could perform a timing attack to determine the correct username and password character-by-character by measuring response time differences, potentially gaining unauthorized administrative access to the system.
ControlFlowNode for password → ControlFlowNode for password
password = request.POST['password'] → ControlFlowNode for password
if username == "admin" and password == "admin": → ControlFlowNode for password
Replace the direct string comparison with a constant-time comparison function. First, import a secure comparison function like `hmac.compare_digest()` from Python's standard library. Then modify the authentication check to compare both username and password using constant-time operations, ensuring the comparison time doesn't reveal information about which part of the credential is incorrect.
import hmac
# ... existing code ...
if request.method == "POST":
    username = request.POST['username']
    password = request.POST['password']
    L.info(f"POST request with username {username}")  # never log passwords
    # Constant-time comparison for both username and password
    expected_username = "admin"
    expected_password = "admin"
    username_match = hmac.compare_digest(username.encode(), expected_username.encode())
    password_match = hmac.compare_digest(password.encode(), expected_password.encode())
    if username_match and password_match:
        return JsonResponse({"message": "Logged in successfully", "method": "post"}, status=200)
    return JsonResponse({"message": "Invalid credentials", "method": "post"}, status=401)
27c224e55677Including functionality from an untrusted source may allow an attacker to control the functionality and execute arbitrary code.
dockerized_labs/sensitive_data_exposure/templates/about.html
86: </div>
87: </div>
88: </div>
89: </div>
90:
>>> 91: <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
92: <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js"></script>
93: <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
94: </body>
95: </html>
An attacker could compromise the CDN or perform a man-in-the-middle attack to inject malicious JavaScript code that executes in users' browsers, leading to session hijacking, data theft, or complete application compromise.
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
Download the jQuery library from the official source and host it locally within your application. Remove the external CDN reference and replace it with a local static file reference. Verify the integrity of the downloaded library using checksums from the official jQuery website. Update your static file serving configuration to include the local jQuery file.
<script src="{{ url_for('static', filename='js/jquery-3.5.1.slim.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/popper.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/bootstrap.min.js') }}"></script>
9cd0e236d7fbdockerized_labs/sensitive_data_exposure/templates/about.html
(Source code not available)
Layer 2 TP Trigger: Insecure Protocol/Configuration. This finding is a duplicate of the first, identifying the same line of code for loading jQuery from an external CDN without integrity checks. The architectural risk of including functionality from an untrusted source remains valid.
3fd6a893c85bIncluding functionality from an untrusted source may allow an attacker to control the functionality and execute arbitrary code.
dockerized_labs/sensitive_data_exposure/templates/base.html
55: {% endif %}
56:
57: {% block content %}{% endblock %}
58: </div>
59:
>>> 60: <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
61: <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js"></script>
62: <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
63: {% block extra_scripts %}{% endblock %}
64: </body>
65: </html>
An attacker could compromise the CDN or perform a man-in-the-middle attack to inject malicious JavaScript code that executes in users' browsers, potentially leading to session hijacking, data theft, or malware distribution.
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
Download the jQuery library from the official source and serve it locally from your application's static files directory. Update the script tag to reference the local file instead of the external CDN. This ensures you control the integrity and availability of the JavaScript library.
<script src="{{ url_for('static', filename='js/jquery-3.5.1.slim.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/popper.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/bootstrap.min.js') }}"></script>
0aa6b88df449dockerized_labs/sensitive_data_exposure/templates/base.html
(Source code not available)
Layer 2 TP Trigger: Insecure Protocol/Configuration - Same finding as #1 but from different scanner. Loading scripts from external CDN without integrity checks creates dependency on untrusted third-party, enabling potential code injection via compromised CDN or MITM attacks.
c140db8dc017Including functionality from an untrusted source may allow an attacker to control the functionality and execute arbitrary code.
dockerized_labs/sensitive_data_exposure/templates/index.html
127: </div>
128: </div>
129: </div>
130: </div>
131:
>>> 132: <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
133: <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js"></script>
134: <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
135: </body>
136: </html>
An attacker could compromise the CDN or perform a man-in-the-middle attack to inject malicious JavaScript code that executes in users' browsers, potentially leading to session hijacking, data theft, or complete application compromise.
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
Download the jQuery library from the official source and serve it locally from your application's static files directory. Update the script tag to reference the local file instead of the CDN URL. This ensures you control the source and integrity of the JavaScript library being loaded.
<script src="{{ url_for('static', filename='js/jquery-3.5.1.slim.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/popper.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/bootstrap.min.js') }}"></script>
9bf53fd5807edockerized_labs/sensitive_data_exposure/templates/index.html
(Source code not available)
Layer 2 TP Trigger: Insecure Protocol/Configuration. This finding is a duplicate of the first, also identifying the inclusion of a script from an external CDN without integrity checks. The architectural risk remains, making it a True Positive.
d36f722e98ddIncluding functionality from an untrusted source may allow an attacker to control the functionality and execute arbitrary code.
dockerized_labs/sensitive_data_exposure/templates/lesson.html
333: </div>
334: </div>
335:
336: <!-- Font awesome for icons -->
337: <script src="https://kit.fontawesome.com/a076d05399.js" crossorigin="anonymous"></script>
>>> 338: <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
339: <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js"></script>
340: <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
341: </body>
342: </html>
An attacker could compromise the CDN or perform a man-in-the-middle attack to inject malicious JavaScript code that executes in users' browsers, potentially leading to session hijacking, data theft, or complete compromise of user accounts.
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
Download the jQuery library from the official source (jquery.com) and host it locally within your application. Replace the external CDN reference with a local file path. Ensure you verify the integrity of the downloaded file using checksums from the official source. Update the script tag to point to your local copy instead of the external CDN.
<!-- Replace the external jQuery CDN reference with a local file -->
<script src="/static/js/jquery-3.5.1.slim.min.js"></script>
<!-- Optionally add integrity and crossorigin attributes if using a CDN you trust -->
<!-- <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script> -->
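If the CDN reference is kept, the `integrity` value can be reproduced locally from the exact bytes served; a minimal sketch of the Subresource Integrity computation (the file name in the comment is an assumption):

```python
import base64
import hashlib

def sri_hash(data: bytes) -> str:
    # SRI value: "sha384-" plus the base64-encoded SHA-384 digest
    # of the exact bytes the browser will receive.
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Example: hash a downloaded copy of the library
# with open("jquery-3.5.1.slim.min.js", "rb") as f:
#     print(sri_hash(f.read()))
```

The browser refuses to execute the script if the fetched bytes no longer match the declared digest, which defeats both CDN compromise and in-transit tampering.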
e6cb6d279b1ddockerized_labs/sensitive_data_exposure/templates/lesson.html
(Source code not available)
Layer 2 triggered: Insecure Protocol/Configuration - Duplicate finding of same vulnerability as #1. Script loaded from CDN without integrity check creates supply chain risk. Both findings point to the same architectural flaw requiring remediation.
b9179a58f0dcIncluding functionality from an untrusted source may allow an attacker to control the functionality and execute arbitrary code.
dockerized_labs/sensitive_data_exposure/templates/login.html
77: </div>
78: </div>
79: </div>
80: </div>
81:
>>> 82: <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
83: <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js"></script>
84: <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
85: </body>
86: </html>
An attacker could compromise the CDN or perform a man-in-the-middle attack to inject malicious JavaScript code that executes in users' browsers, potentially stealing credentials, session tokens, or performing actions on behalf of authenticated users.
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
Download the required JavaScript libraries (jQuery, Popper.js, Bootstrap) from their official sources and host them locally within your application. Remove the external CDN references and update the script tags to point to your local copies. This ensures you control the integrity and availability of these dependencies.
<script src="/static/js/jquery-3.5.1.slim.min.js"></script>
<script src="/static/js/popper.min.js"></script>
<script src="/static/js/bootstrap.min.js"></script>
fbb07cbc2bd6Writing user input directly to the DOM allows for a cross-site scripting vulnerability.
dockerized_labs/sensitive_data_exposure/templates/login.html
59: {{ form.username|safe }}
60: </div>
61: <div class="form-group">
62: <label for="id_password">Password:</label>
63: {{ form.password.errors }}
>>> 64: {{ form.password|safe }}
65: </div>
66: <button type="submit" class="btn btn-primary">Login</button>
67: </form>
68: <p class="mt-3">Don't have an account? <a href="{% url 'register' %}">Register here</a></p>
69:
An attacker could inject malicious JavaScript into the password field that would execute in users' browsers when viewing the login page, potentially stealing session cookies, credentials, or performing actions on behalf of authenticated users.
{{ form.password|safe }}
Remove the 'safe' filter from the form field rendering, as it disables Django's automatic HTML escaping. Instead, use Django's built-in form rendering: replace '{{ form.password|safe }}' with '{{ form.password }}' (and likewise '{{ form.username|safe }}' on line 59) so the template engine properly escapes potentially dangerous characters.
<div class="form-group">
    <label for="id_password">Password:</label>
    {{ form.password.errors }}
    {{ form.password }}
</div>
e2aceb2846ccdockerized_labs/sensitive_data_exposure/templates/login.html
(Source code not available)
Layer 2 TP Trigger: Insecure Protocol/Configuration. This finding is a duplicate of finding #1, identifying the same script loaded from a CDN without integrity checks. The risk of loading functionality from an untrusted external source without SRI is a valid TP for configuration flaws.
019212c1d46bdockerized_labs/sensitive_data_exposure/templates/profile.html
(Source code not available)
Layer 2: Architectural Safety - This is another duplicate of finding #2, identifying the clear-text storage of sensitive data (API key) in localStorage, which is a Client-Side Storage Risk.
ca5f77594680Including functionality from an untrusted source may allow an attacker to control the functionality and execute arbitrary code.
dockerized_labs/sensitive_data_exposure/templates/register.html
88: </div>
89: </div>
90: </div>
91: </div>
92:
>>> 93: <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
94: <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js"></script>
95: <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
96: </body>
97: </html>
An attacker could compromise the CDN or perform a man-in-the-middle attack to inject malicious JavaScript code that executes in users' browsers, potentially stealing credentials, session tokens, or performing actions on behalf of authenticated users.
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
Download the jQuery library from the official source and serve it locally from your application's static directory. Remove the external CDN reference and replace it with a local path. Ensure you verify the integrity of the downloaded file using checksums from the official jQuery website.
<script src="{{ url_for('static', filename='js/jquery-3.5.1.slim.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/popper.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/bootstrap.min.js') }}"></script>
28f94d2f34f0dockerized_labs/sensitive_data_exposure/templates/register.html
(Source code not available)
Layer 2 triggered: Insecure Protocol/Configuration - Same finding as #1 but from different scanner. External CDN dependency without integrity checks represents a real architectural security risk, not a structural false positive.
04a825185cceCross-Site Request Forgery (CSRF)
introduction/templates/Lab_2021/A1_BrokenAccessControl/broken_access_lab_1.html
8: <h4 style="text-align:center"> Admins Have the Secretkey</h4>
9: <div class="login">
>>> 10: <form method="post" action="/broken_access_lab_1">
11:
12: <input id="input" type="text" name="name" placeholder="User Name"><br>
13: <input id="input" type="password" name="pass" placeholder="Password"><br>
14: <button style="margin-top:20px" class="btn btn-info" type="submit"> Log in</button>
15:
16:
17: </form>
18: </div>
Attackers can trick authenticated users into submitting malicious requests without their knowledge. This can lead to unauthorized actions being performed on behalf of the user, such as changing passwords, making transactions, or modifying account settings. The vulnerability affects multiple forms including broken_access_lab_1.html (line 10), broken_access_lab_2.html, and cryptographic failure labs.
<form method="post" action="/broken_access_lab_1"> → N/A
1. Add {% csrf_token %} to all POST forms.
2. Ensure Django's CsrfViewMiddleware is enabled.
3. Require CSRF validation for all state-changing operations instead of exempting them.
4. Set the SameSite attribute on session and CSRF cookies.
<form method="post" action="/broken_access_lab_1">
{% csrf_token %}
<input id="input" type="text" name="name" placeholder="User Name"><br>
<input id="input" type="password" name="pass" placeholder="Password"><br>
<button style="margin-top:20px" class="btn btn-info" type="submit">Log in</button>
</form>
# In Django settings.py
MIDDLEWARE = [
# ...
'django.middleware.csrf.CsrfViewMiddleware',
# ...
]
2c1558f2ea8c
Exposure of Sensitive Information to an Unauthorized Actor
introduction/templates/Lab_2021/A1_BrokenAccessControl/secret.html
N/A 1: {% extends "introduction/base.html" %}
2: {% load static %}
3: {% block content %}
4: {% block title %}
5: <title>Cryptographic Failure</title>
>>> 6: {% endblock %}
7: SOME_SECRET_KEYS = THIS_FILE_CONTAINS_SECRET_INFORMATION
8:
9: {% endblock %}
Sensitive information including secret keys and configuration details are exposed in template files that may be accessible through directory traversal, misconfiguration, or source code disclosure. This can lead to complete compromise of the application if secrets like API keys, database credentials, or encryption keys are leaked.
SOME_SECRET_KEYS = THIS_FILE_CONTAINS_SECRET_INFORMATION → N/A
1. Remove all hardcoded secrets from template files.
2. Store secrets in environment variables or a secure secret management system.
3. Use Django's settings system with environment-specific configuration files.
4. Implement proper access controls to prevent unauthorized access to sensitive templates.
# Store secrets in environment variables
import os
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY')
DATABASE_PASSWORD = os.environ.get('DB_PASSWORD')
API_KEY = os.environ.get('API_KEY')
# In Django settings.py
from django.core.exceptions import ImproperlyConfigured
def get_env_variable(var_name):
try:
return os.environ[var_name]
except KeyError:
error_msg = f"Set the {var_name} environment variable"
raise ImproperlyConfigured(error_msg)
SECRET_KEY = get_env_variable('DJANGO_SECRET_KEY')
6fcfc8719ccd
/Users/jyothi/Projects/Test/pygoat/introduction/views.py
548: else:
549:
550: try :
551: file=request.FILES["file"]
552: try :
>>> 553: data = yaml.load(file,yaml.Loader)
554:
555: return render(request,"Lab/A9/a9_lab.html",{"data":data})
556: except:
557: return render(request, "Lab/A9/a9_lab.html", {"data": "Error"})
558:
Potential deserialization: 'request.FILES["file"]' → 'yaml.load'
request.FILES["file"]
yaml.load
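A safe replacement here is yaml.safe_load, which refuses YAML tags that instantiate arbitrary Python objects. A minimal sketch (the handler name is hypothetical; PyYAML is assumed, as in the flagged code):

```python
import yaml

def parse_uploaded_yaml(stream):
    """Parse untrusted YAML without constructing arbitrary Python objects."""
    try:
        # safe_load rejects tags like !!python/object/apply:os.system
        return yaml.safe_load(stream)
    except yaml.YAMLError:
        return None
```

In the view, `data = parse_uploaded_yaml(file)` would replace the `yaml.load(file, yaml.Loader)` call.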
bd3b79be02cb
/Users/jyothi/Projects/Test/pygoat/introduction/views.py
572: return render (request,"Lab/A9/a9_lab2.html")
573: elif request.method == "POST":
574: try :
575: file=request.FILES["file"]
576: function_str = request.POST.get("function")
>>> 577: img = Image.open(file)
578: img = img.convert("RGB")
579: r,g,b = img.split()
580: # function_str = "convert(r+g, '1')"
581: output = ImageMath.eval(function_str,img = img, b=b, r=r, g=g)
582:
Potential path traversal: 'request.FILES["file"]' → 'Image.open'
request.FILES["file"]
Image.open
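The marked line opens an uploaded image, but the more dangerous sink in this handler is ImageMath.eval on the user-controlled function_str at line 581. A stdlib sketch of a pre-evaluation whitelist (the allowed names are an assumption drawn from the variables the handler binds):

```python
import re

# Only these identifiers may appear in the expression; an assumption
# based on the names bound in the handler (img and the r/g/b channels,
# plus PIL's convert function).
ALLOWED_NAMES = {"convert", "img", "r", "g", "b"}
TOKEN_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")
SAFE_CHARS_RE = re.compile(r"""^[A-Za-z0-9_+\-*/()., '"]*$""")

def is_safe_image_expression(expr: str) -> bool:
    """Reject expressions with characters or names outside the whitelist."""
    if not SAFE_CHARS_RE.match(expr):
        return False
    return all(name in ALLOWED_NAMES for name in TOKEN_RE.findall(expr))
```

Only if is_safe_image_expression(function_str) returns True would the handler go on to call ImageMath.eval.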
015cffe15a37
/Users/jyothi/Projects/Test/pygoat/introduction/views.py
915: else:
916: file=request.POST["blog"]
917: try :
918: dirname = os.path.dirname(__file__)
919: filename = os.path.join(dirname, file)
>>> 920: file = open(filename,"r")
921: data = file.read()
922: return render(request,"Lab/ssrf/ssrf_lab.html",{"blog":data})
923: except:
924: return render(request, "Lab/ssrf/ssrf_lab.html", {"blog": "No blog found"})
925: else:
Cross-file flow: apis.py:30 -> views.py:920
ssrf_html_input_extractor
open
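A containment check stops ../ sequences from escaping the blog directory. A minimal sketch, assuming the blogs live under a fixed base directory (function and parameter names are hypothetical):

```python
import os

def read_blog(base_dir: str, requested: str):
    """Open a file only if it resolves to a path inside base_dir."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, requested))
    # commonpath rejects targets that escaped via ../ or an absolute path
    if os.path.commonpath([base, target]) != base:
        return None
    with open(target, "r") as fh:
        return fh.read()
```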
fd7a5f22c34a
/Users/jyothi/Projects/Test/pygoat/dockerized_labs/broken_auth_lab/app.py
44: # Vulnerable: Insecure session management
45: session_token = base64.b64encode(f"{username}:{datetime.now()}".encode()).decode()
46:
47: if remember_me:
48: # Vulnerable: Insecure "Remember Me" implementation
>>> 49: response.set_cookie('session', session_token, max_age=30*24*60*60)
50: else:
51: response.set_cookie('session', session_token)
52:
53: return response
54:
Potential header injection: 'request.form.get' → 'response.set_cookie'
request.form.get
response.set_cookie
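Because the token is just base64 of the username and a timestamp, it can be forged or decoded by anyone. A sketch of issuing an opaque random token instead, with the username kept server-side (the in-memory store is an illustration; a real app needs a persistent session store with expiry):

```python
import secrets

# Server-side map of opaque token -> username.
SESSIONS = {}

def create_session(username: str) -> str:
    """Issue an unguessable, opaque session token."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = username
    return token
```

When setting it, the cookie should also carry flags, e.g. response.set_cookie('session', token, httponly=True, secure=True, samesite='Lax') in Flask.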
a689e28da718
/Users/jyothi/Projects/Test/pygoat/dockerized_labs/broken_auth_lab/app.py
81:
82: # Vulnerable: Password reset token generation
83: for username, user_data in users.items():
84: if user_data['email'] == email:
85: # Vulnerable: Predictable token generation
>>> 86: token = hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest()
87: password_reset_tokens[token] = username
88:
89: # In a real application, this would send an email
90: # Vulnerable: Token exposed in response
91: flash(f'Password reset link: /reset/{token}')
Potential weak hash: 'request.form.get' → 'hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest'
request.form.get
hashlib.md5(f"{email}:{datetime.now()}".encode()).hexdigest
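An MD5 of the email plus a timestamp is guessable by anyone who can bracket the request time. A sketch using the secrets module, mirroring the lab's password_reset_tokens store (single-use redemption is an added assumption):

```python
import secrets

password_reset_tokens = {}

def issue_reset_token(username: str) -> str:
    """Generate a cryptographically random reset token."""
    token = secrets.token_urlsafe(32)
    password_reset_tokens[token] = username
    return token

def redeem_reset_token(token: str):
    """Look up and invalidate the token in one step (single use)."""
    return password_reset_tokens.pop(token, None)
```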
bcd90bd723a9
/Users/jyothi/Projects/Test/pygoat/dockerized_labs/insec_des_lab/main.py
31: def deserialize_data():
32: try:
33: serialized_data = request.form.get('serialized_data', '')
34: decoded_data = base64.b64decode(serialized_data)
35: # Intentionally vulnerable deserialization, matching PyGoat
>>> 36: user = pickle.loads(decoded_data)
37:
38: if isinstance(user, User):
39: if user.is_admin:
40: message = f"Welcome Admin {user.username}! Here's the secret admin content: ADMIN_KEY_123"
41: else:
Potential deserialization: 'request.form.get' → 'pickle.loads'
request.form.get
pickle.loads
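pickle.loads on attacker-supplied bytes allows code execution through __reduce__. If the payload only carries plain user fields, JSON is a safe substitute; a sketch (the field names mirror the lab's User class, which is an assumption about its shape):

```python
import base64
import json

def deserialize_user(encoded: str):
    """Decode base64-wrapped JSON instead of pickle: only data, never code."""
    try:
        payload = json.loads(base64.b64decode(encoded))
    except ValueError:  # covers bad base64 and bad JSON
        return None
    if not isinstance(payload, dict):
        return None
    return {
        "username": str(payload.get("username", "")),
        "is_admin": bool(payload.get("is_admin", False)),
    }
```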
e4cf54b5bda5
/Users/jyothi/Projects/Test/pygoat/introduction/playground/ssrf/main.py
2:
3:
4: def ssrf_lab(file):
5: try:
6: dirname = os.path.dirname(__file__)
>>> 7: filename = os.path.join(dirname, file)
8: file = open(filename,"r")
9: data = file.read()
10: return {"blog":data}
11: except:
12: return {"blog": "No blog found"}
Cross-file flow: apis.py:30 -> main.py:7
ssrf_html_input_extractor
os.path.join
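The playground copy shares the flaw, so the fix belongs in the helper itself. A pathlib-based containment sketch (the function name is hypothetical; is_relative_to needs Python 3.9+):

```python
from pathlib import Path

def ssrf_lab_safe(base_dir: str, requested: str):
    """Resolve the requested path and serve it only from inside base_dir."""
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        return {"blog": "No blog found"}
    try:
        return {"blog": target.read_text()}
    except OSError:
        return {"blog": "No blog found"}
```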
2b1229034be1
Image using latest tag
docker-compose.yml
>>> 12: image: pygoat/pygoat:latest
Relying on the mutable latest tag makes deployments non-reproducible: the image can change between pulls, and a compromised or breaking upstream push is deployed silently.
image: pygoat/pygoat:latest
Pin the image to a specific, immutable version tag (or, better, a content digest) instead of latest.
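One way to pin, sketched against the compose file above (the version tag shown is illustrative, not a published PyGoat release; a sha256 digest is stricter still):

```yaml
services:
  pygoat:
    # Illustrative pinned tag; for full immutability pin the digest instead:
    # image: pygoat/pygoat@sha256:<digest>
    image: pygoat/pygoat:2.0.1
```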
c773f010794c
dockerized_labs/sensitive_data_exposure/templates/profile.html
217: var userData = {
218: username: "{{ user.username }}",
219: apiKey: "{{ user_data.api_key }}" // Yeah it is necessary for this lab.
220: };
221:
>>> 222: localStorage.setItem('user_api_key', "{{ user_data.api_key }}");
223:
224: console.log("Sensitive data exposed in console - check browser dev tools!");
225:
226: // more bad practices
227: // function checkAdminStatus() {
Layer 2: Architectural Safety - This finding is a duplicate of finding #2, identifying the same insecure storage of a sensitive API key in localStorage. It is a Client-Side Storage Risk.
localStorage.setItem('user_api_key', "{{ user_data.api_key }}");
console.log("Sensitive data exposed in console - check browser dev tools!");
bedb99158956
introduction/templates/Lab/sec_mis/sec_mis_lab3.html
25: def sec_misconfig_lab3(request):
26:     if not request.user.is_authenticated:
27:         return redirect('login')
28:     try:
29:         cookie = request.COOKIES["auth_cookie"]
>>> 30:         payload = jwt.decode(cookie, SECRET_COOKIE_KEY, algorithms=['HS256'])
31:         if payload['user'] == 'admin':
32:             return render(request,"Lab/sec_mis/sec_mis_lab3.html", {"admin":True})
33:     except:
34:         payload = {
35:             'user':'not_admin',
Layer 2 triggered: The finding relates to a Client-Side Storage Risk. The JWT token is being read from a cookie (`request.COOKIES["auth_cookie"]`). If this cookie lacks the `HttpOnly` flag (not visible in the snippet but implied by the vulnerability type), it is accessible to client-side JavaScript, making it vulnerable to exfiltration via XSS. This is a modern client-side security risk.
jwt.decode(cookie, SECRET_COOKIE_KEY, algorithms=['HS256'])
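The jwt.decode call already pins algorithms=['HS256'], which is good; the gap the finding points at is how auth_cookie is set. A stdlib sketch of emitting it with hardening flags (the helper name is hypothetical):

```python
from http.cookies import SimpleCookie

def build_auth_cookie(token: str) -> str:
    """Return a Set-Cookie header value with hardening flags."""
    cookie = SimpleCookie()
    cookie["auth_cookie"] = token
    morsel = cookie["auth_cookie"]
    morsel["httponly"] = True   # not readable from JavaScript
    morsel["secure"] = True     # only sent over HTTPS
    morsel["samesite"] = "Lax"  # limits cross-site sends
    morsel["path"] = "/"
    return morsel.OutputString()
```

In Django the equivalent is response.set_cookie('auth_cookie', token, httponly=True, secure=True, samesite='Lax').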
2a1d38eb0233
dockerized_labs/broken_auth_lab/templates/dashboard.html
74: </style>
75:
76: <script>
77: function logout() {
78: // Clear the session cookie and redirect to login
>>> 79: document.cookie = "session=; expires=Thu, 01 Jan 1970 00:00:00 UTC; path=/;";
80: window.location.href = "/lab";
81: }
82: </script>
83: {% endblock %}
Client-side JavaScript is directly manipulating a session cookie without the Secure flag, exposing it to potential interception over non-HTTPS connections. This is an architectural configuration flaw (Insecure Protocol/Configuration trigger).
document.cookie =
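Expiring the cookie in JavaScript also implies the server never invalidates the session. A logout endpoint should expire the cookie itself; a stdlib sketch of the Set-Cookie value it might emit (the helper name is hypothetical, and the server-side session record should be deleted too):

```python
from http.cookies import SimpleCookie

def build_logout_cookie(name: str = "session") -> str:
    """Expire the session cookie with the same flags it was set with."""
    cookie = SimpleCookie()
    cookie[name] = ""
    morsel = cookie[name]
    morsel["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"
    morsel["max-age"] = 0
    morsel["path"] = "/"
    morsel["secure"] = True
    morsel["httponly"] = True
    return morsel.OutputString()
```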