Nettacker code base major refactoring

This is a refactor of the existing Nettacker code base that I've been working on recently. An (incomplete) list of changes:

- add pre-commit checks
- apply OOP approach to the application architecture
- consolidate common modules logic into a base class
- extract YAML parsing logic into a separate module
- fix some typos
- get rid of (not all) misused try/except blocks
- migrate to poetry, remove requirements.* files
- re-design configuration module
- re-design logging module
- split application logic into classes
- use `pathlib` for path-related manipulations (see the sketch after this list)
- use context-based naming for variables, modules, directories, etc.
- use module-level imports (vs function-level)
- use the base class for specific protocol libraries
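
For a sense of what the `pathlib` and configuration bullets translate to in practice, here is a rough sketch (class and attribute names are my assumptions, not necessarily what the refactor uses); compare it with the `os.path.join(sys.path[0], ...)` pattern in the deleted `config.py` below:

# Hypothetical sketch of a pathlib-based configuration; names are
# illustrative only and may not match the refactored code.
from pathlib import Path


class Config:
    home_dir = Path(__file__).parent  # package root, e.g. src/nettacker
    data_dir = home_dir / ".data"
    tmp_dir = data_dir / "tmp"
    results_dir = data_dir / "results"
    database_file = data_dir / "nettacker.db"
    modules_dir = home_dir / "modules"


# Path objects compose with "/" and convert to str only at the edges:
print(Config.results_dir / "report.html")
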
Arkadii Yakovets 2024-08-08 11:04:35 -07:00
parent 6baee0fd5e
commit 14f6c06207
No known key found for this signature in database
GPG Key ID: 7350E7F17DFE6846
803 changed files with 7114 additions and 5566 deletions

@@ -1,51 +0,0 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v4.1.2
      - name: Build the code with docker
        run: docker build . -t nettacker
      - name: Lint
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm nettacker flake8 --extend-exclude '*.txt,*.js,*.md,*.html' --count --select=E9,F63,F7,F82 --show-source"
      - name: Help Menu
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm nettacker python nettacker.py --help"
      - name: Help Menu in Persian
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm nettacker python nettacker.py --help -L fa"
      - name: Show all modules
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm nettacker python nettacker.py --show-all-modules"
      - name: Show all profiles
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm nettacker python nettacker.py --show-all-profiles"
      - name: Test all modules command + check if it finishes successfully + csv
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm -i nettacker python nettacker.py -i 127.0.0.1 -u user1,user2 -p pass1,pass2 -m all -g 21,25,80,443 -t 1000 -T 3 -o out.csv"
      - name: Test all modules command + check if it finishes successfully + csv + skip service discovery
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm -i nettacker python nettacker.py -i 127.0.0.1 -u user1,user2 -p pass1,pass2 -m all -g 21,25,80,443 -t 1000 -T 3 -o out.csv --skip-service-discovery"
      - name: Test all modules command + check if it finishes successfully + with graph + Persian
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm -i nettacker python nettacker.py -i 127.0.0.1 -L fa -u user1,user2 -p pass1,pass2 --profile all -g 21,25,80,443 -t 1000 -T 3 --graph d3_tree_v2_graph -v"
      - name: Test all modules command + check if it finishes successfully + with graph + Persian + skip service discovery
        run: "docker run -e github_ci=true -v $(pwd):/usr/src/owaspnettacker --rm -i nettacker python nettacker.py -i 127.0.0.1 -L fa -u user1,user2 -p pass1,pass2 --profile all -g 21,25,80,443 -t 1000 -T 3 --graph d3_tree_v2_graph -v --skip-service-discovery"

.github/workflows/ci_cd.yml (new file)

@@ -0,0 +1,179 @@
name: CI/CD

on: [push, pull_request]

jobs:
  run-pytest:
    name: Run pytest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade poetry
          poetry install --with test
      - name: Run tests
        run: |
          poetry run pytest
  build-package:
    name: Build package
    needs: run-pytest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install --upgrade poetry
          poetry install
      - name: Build package
        run: |
          poetry build --no-interaction
      - name: Upload package artifacts
        uses: actions/upload-artifact@v3
        with:
          name: dist
          path: dist
  publish-to-test-pypi:
    name: Publish to Test PyPI
    if: |
      ${{
        github.repository == 'owasp/nettacker' &&
        github.event_name == 'push' &&
        github.ref_name == 'implement-package-publishing-poc' }}
    # environment: test
    needs:
      - build-package
    permissions:
      contents: read
      id-token: write
    runs-on: ubuntu-latest
    steps:
      - name: Get package artifacts
        uses: actions/download-artifact@v3
        with:
          name: dist
          path: dist
      - name: Publish package distributions to Test PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/
  publish-to-pypi:
    name: Publish to PyPI
    if: |
      ${{ github.repository == 'owasp/nettacker' &&
        github.event_name == 'push' &&
        github.ref_name == 'implement-package-publishing-poc' }}
    # environment: release
    needs:
      - build-package
    permissions:
      contents: read
      id-token: write
    runs-on: ubuntu-latest
    steps:
      - name: Get package artifacts
        uses: actions/download-artifact@v3
        with:
          name: dist
          path: dist
      - name: Publish package distributions to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
  test-docker-image:
    name: Test Docker image
    # needs: publish-to-pypi
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v4.1.2
      - name: Build Docker image
        run: docker build . -t nettacker
      - name: Test help menu
        run: |
          docker run -e github_ci=true --rm nettacker \
            poetry run python ./src/nettacker/run.py --help
      - name: Test help menu in Persian
        run: |
          docker run -e github_ci=true --rm nettacker \
            poetry run python ./src/nettacker/run.py --help -L fa
      - name: Show all modules
        run: |
          docker run -e github_ci=true --rm nettacker \
            poetry run python ./src/nettacker/run.py --show-all-modules
      - name: Show all profiles
        run: |
          docker run -e github_ci=true --rm nettacker \
            poetry run python ./src/nettacker/run.py --show-all-profiles
      - name: Test all modules command + check if it finishes successfully + csv
        run: |
          docker run -e github_ci=true --rm -i nettacker \
            poetry run python ./src/nettacker/run.py -i 127.0.0.1 -u user1,user2 -p pass1,pass2 -m all -g 21,25,80,443 \
              -t 1000 -T 3 -o out.csv
      - name: Test all modules command + check if it finishes successfully + csv + skip service discovery
        run: |
          docker run -e github_ci=true --rm -i nettacker \
            poetry run python ./src/nettacker/run.py -i 127.0.0.1 -u user1,user2 -p pass1,pass2 -m all -g 21,25,80,443 \
              -t 1000 -T 3 -o out.csv --skip-service-discovery
      - name: Test all modules command + check if it finishes successfully + with graph + Persian
        run: |
          docker run -e github_ci=true --rm -i nettacker \
            poetry run python ./src/nettacker/run.py -i 127.0.0.1 -L fa -u user1,user2 -p pass1,pass2 --profile all \
              -g 21,25,80,443 -t 1000 -T 3 --graph d3_tree_v2_graph -v
      - name: Test all modules command + check if it finishes successfully + with graph + Persian + skip service discovery
        run: |
          docker run -e github_ci=true --rm -i nettacker \
            poetry run python ./src/nettacker/run.py -i 127.0.0.1 -L fa -u user1,user2 -p pass1,pass2 --profile all \
              -g 21,25,80,443 -t 1000 -T 3 --graph d3_tree_v2_graph -v --skip-service-discovery
  publish-to-docker-registry:
    name: Publish Docker image
    needs:
      - test-docker-image
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4.1.0
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: Dockerfile
          push: true
          tags: owasp/nettacker:dev

.gitignore

@@ -7,8 +7,8 @@
 *.code-workspace
 #setup
-build/*
-dist/*
+build
+dist
 *egg-info*
 #tmp files
@@ -27,4 +27,4 @@ results.*
 .coverage
 coverage.xml
-venv/*
+venv

.pre-commit-config.yaml (new file)

@@ -0,0 +1,26 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-ast
      - id: check-builtin-literals
      - id: check-yaml
      - id: fix-encoding-pragma
        args:
          - --remove
      - id: mixed-line-ending
        args:
          - --fix=lf
  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.13
    hooks:
      - id: ruff
        args:
          - --fix
      - id: ruff-format

Dockerfile

@@ -4,9 +4,8 @@ WORKDIR /usr/src/owaspnettacker
 COPY . .
 RUN mkdir -p .data/results
 RUN apt-get update
-RUN apt-get install -y $(cat requirements-apt-get.txt)
-RUN pip3 install --upgrade pip
-RUN pip3 install -r requirements.txt
-RUN pip3 install -r requirements-dev.txt
+RUN apt-get install -y gcc libssl-dev
+RUN pip3 install --upgrade poetry
+RUN python -m poetry install
 ENV docker_env=true
-CMD [ "python3", "./nettacker.py" ]
+CMD [ "poetry", "run", "python", "./run.py" ]

@@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

config.py

@@ -1,143 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
import sys

from core.time import now
from core.utility import generate_random_token


def nettacker_paths():
    """
    home path for the framework (could be modified by user)

    Returns:
        a JSON containing the working, tmp and results paths
    """
    return {
        "requirements_path": os.path.join(sys.path[0], 'requirements.txt'),
        "requirements_dev_path": os.path.join(sys.path[0], 'requirements-dev.txt'),
        "home_path": os.path.join(sys.path[0]),
        "data_path": os.path.join(sys.path[0], '.data'),
        "tmp_path": os.path.join(sys.path[0], '.data/tmp'),
        "results_path": os.path.join(sys.path[0], '.data/results'),
        "database_path": os.path.join(sys.path[0], '.data/nettacker.db'),
        "version_file": os.path.join(sys.path[0], 'version.txt'),
        "logo_file": os.path.join(sys.path[0], 'logo.txt'),
        "messages_path": os.path.join(sys.path[0], 'lib/messages'),
        "modules_path": os.path.join(sys.path[0], 'modules'),
        "web_browser_user_agents": os.path.join(sys.path[0], 'lib/payloads/User-Agents/web_browsers_user_agents.txt'),
        "web_static_files_path": os.path.join(sys.path[0], 'web/static'),
        "payloads_path": os.path.join(sys.path[0], 'lib/payloads'),
        "module_protocols_path": os.path.join(sys.path[0], 'core/module_protocols'),
    }


def nettacker_api_config():
    """
    API Config (could be modified by user)

    Returns:
        a JSON with API configuration
    """
    return {  # OWASP Nettacker API Default Configuration
        "start_api_server": False,
        "api_hostname": "0.0.0.0" if os.environ.get("docker_env") == "true" else "nettacker-api.z3r0d4y.com",
        "api_port": 5000,
        "api_debug_mode": False,
        "api_access_key": generate_random_token(32),
        "api_client_whitelisted_ips": [],  # disabled - to enable please put an array with list of ips/cidr/ranges
        # [
        #     "127.0.0.1",
        #     "10.0.0.0/24",
        #     "192.168.1.1-192.168.1.255"
        # ],
        "api_access_log": os.path.join(sys.path[0], '.data/nettacker.log'),
    }


def nettacker_database_config():
    """
    Database Config (could be modified by user)

    For sqlite database:
        fill the name of the DB as sqlite,
        DATABASE as the name of the db the user wants,
        other details can be left empty

    For mysql users:
        fill the name of the DB as mysql,
        DATABASE as the name of the database you want to create;
        USERNAME, PASSWORD, HOST and the PORT of the MySQL server
        need to be filled respectively

    Returns:
        a JSON with Database configuration
    """
    return {
        "DB": "sqlite",
        # "DB": "mysql", "DB": "postgres"
        "DATABASE": nettacker_paths()["database_path"],  # Name of the database
        "USERNAME": "",
        "PASSWORD": "",
        "HOST": "",
        "PORT": ""
    }


def nettacker_user_application_config():
    """
    core framework default config (could be modified by user)

    Returns:
        a JSON with all user default configurations
    """
    from core.compatible import version_info
    return {  # OWASP Nettacker Default Configuration
        "language": "en",
        "verbose_mode": False,
        "verbose_event": False,
        "show_version": False,
        "report_path_filename": "{results_path}/results_{date_time}_{random_chars}.html".format(
            results_path=nettacker_paths()["results_path"],
            date_time=now(model="%Y_%m_%d_%H_%M_%S"),
            random_chars=generate_random_token(10)
        ),
        "graph_name": "d3_tree_v2_graph",
        "show_help_menu": False,
        "targets": None,
        "targets_list": None,
        "selected_modules": None,
        "excluded_modules": None,
        "usernames": None,
        "usernames_list": None,
        "passwords": None,
        "passwords_list": None,
        "ports": None,
        "timeout": 3.0,
        "time_sleep_between_requests": 0.0,
        "scan_ip_range": False,
        "scan_subdomains": False,
        "skip_service_discovery": False,
        "thread_per_host": 100,
        "parallel_module_scan": 1,
        "socks_proxy": None,
        "retries": 1,
        "ping_before_scan": False,
        "profiles": None,
        "set_hardware_usage": "maximum",  # low, normal, high, maximum
        "user_agent": "Nettacker {version_number} {version_code}".format(
            version_number=version_info()[0], version_code=version_info()[1]
        ),
        "show_all_modules": False,
        "show_all_profiles": False,
        "modules_extra_args": None
    }


def nettacker_global_config():
    return {
        "nettacker_paths": nettacker_paths(),
        "nettacker_api_config": nettacker_api_config(),
        "nettacker_database_config": nettacker_database_config(),
        "nettacker_user_application_config": nettacker_user_application_config()
    }

@@ -1,2 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

@@ -1,225 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import sys

from core import color
from core.messages import load_message
from core.time import now

message_cache = load_message().messages


def run_from_api():
    """
    check if framework run from API to prevent any alert

    Returns:
        True if run from API otherwise False
    """
    return "--start-api" in sys.argv


def verbose_mode_is_enabled():
    return '--verbose' in sys.argv or '-v' in sys.argv


def event_verbose_mode_is_enabled():
    return '--verbose-event' in sys.argv


def messages(msg_id):
    """
    load a message from message library with specified language

    Args:
        msg_id: message id

    Returns:
        the message content in the selected language if
        message found otherwise return message in English
    """
    return message_cache[str(msg_id)]


def info(content):
    """
    build the info message, log the message in database if requested,
    rewrite the thread temporary file

    Args:
        content: content of the message

    Returns:
        None
    """
    if not run_from_api():
        sys.stdout.buffer.write(
            bytes(
                color.color("yellow")
                + "[{0}][+] ".format(now())
                + color.color("green")
                + content
                + color.color("reset")
                + "\n",
                "utf8",
            )
        )
        sys.stdout.flush()


def verbose_event_info(content):
    """
    build the info message, log the message in database if requested,
    rewrite the thread temporary file

    Args:
        content: content of the message

    Returns:
        None
    """
    if (not run_from_api()) and (
        verbose_mode_is_enabled() or event_verbose_mode_is_enabled()
    ):  # prevent to stdout if run from API
        sys.stdout.buffer.write(
            bytes(
                color.color("yellow")
                + "[{0}][+] ".format(now())
                + color.color("green")
                + content
                + color.color("reset")
                + "\n",
                "utf8",
            )
        )
        sys.stdout.flush()


def success_event_info(content):
    """
    build the info message, log the message in database if requested,
    rewrite the thread temporary file

    Args:
        content: content of the message

    Returns:
        None
    """
    if not run_from_api():
        sys.stdout.buffer.write(
            bytes(
                color.color("red")
                + "[{0}][+++] ".format(now())
                + color.color("cyan")
                + content
                + color.color("reset")
                + "\n",
                "utf8",
            )
        )
        sys.stdout.flush()


def verbose_info(content):
    """
    build the info message, log the message in database if requested,
    rewrite the thread temporary file

    Args:
        content: content of the message

    Returns:
        None
    """
    if verbose_mode_is_enabled():
        sys.stdout.buffer.write(
            bytes(
                color.color("yellow")
                + "[{0}][+] ".format(now())
                + color.color("purple")
                + content
                + color.color("reset")
                + "\n",
                "utf8",
            )
        )
        sys.stdout.flush()


def write(content):
    """
    simply print a message

    Args:
        content: content of the message

    Returns:
        None
    """
    if not run_from_api():
        sys.stdout.buffer.write(
            bytes(content, "utf8") if isinstance(content, str) else content
        )
        sys.stdout.flush()


def warn(content):
    """
    build the warn message

    Args:
        content: content of the message

    Returns:
        the message in warn structure - None
    """
    if not run_from_api():
        sys.stdout.buffer.write(
            bytes(
                color.color("blue")
                + "[{0}][!] ".format(now())
                + color.color("yellow")
                + content
                + color.color("reset")
                + "\n",
                "utf8",
            )
        )
        sys.stdout.flush()


def error(content):
    """
    build the error message

    Args:
        content: content of the message

    Returns:
        the message in error structure - None
    """
    data = (
        color.color("red")
        + "[{0}][X] ".format(now())
        + color.color("yellow")
        + content
        + color.color("reset")
        + "\n"
    )
    sys.stdout.buffer.write(data.encode("utf8"))
    sys.stdout.flush()


def write_to_api_console(content):
    """
    simply print a message in API mode

    Args:
        content: content of the message

    Returns:
        None
    """
    sys.stdout.buffer.write(bytes(content, "utf8"))
    sys.stdout.flush()
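
The module above is the old logging layer that the "re-design logging module" bullet replaces; every function assembles ANSI escape codes and timestamps by hand. A hedged sketch of the direction, using the standard-library `logging` module (names and formatting are my assumptions, not necessarily the refactor's actual code):

# Hypothetical sketch only: one formatter instead of per-function
# escape-code assembly.
import logging


class ColorFormatter(logging.Formatter):
    COLORS = {"INFO": "\033[1;32m", "WARNING": "\033[1;33m", "ERROR": "\033[1;31m"}

    def format(self, record):
        color = self.COLORS.get(record.levelname, "")
        return f"{color}[{self.formatTime(record)}] {record.getMessage()}\033[0m"


logger = logging.getLogger("nettacker")
handler = logging.StreamHandler()
handler.setFormatter(ColorFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("scan started")  # prints one colored, timestamped line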

@@ -1,639 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import argparse
import sys
import json

from core.alert import write
from core.alert import warn
from core.alert import info
from core.alert import messages
from core.color import color
from core.compatible import version_info
from config import nettacker_global_config
from core.load_modules import load_all_languages
from core.utility import (application_language,
                          select_maximum_cpu_core)
from core.die import die_success
from core.die import die_failure
from core.color import reset_color
from core.load_modules import load_all_modules
from core.load_modules import load_all_graphs
from core.load_modules import load_all_profiles


def load_all_args():
    """
    create the ARGS and help menu

    Returns:
        the parser, the ARGS
    """
    nettacker_global_configuration = nettacker_global_config()

    # Language Options
    language = application_language()
    languages_list = load_all_languages()
    if language not in languages_list:
        die_failure(
            "Please select one of these languages {0}".format(
                languages_list
            )
        )
    reset_color()

    # Start Parser
    parser = argparse.ArgumentParser(prog="Nettacker", add_help=False)

    # Engine Options
    engineOpt = parser.add_argument_group(
        messages("engine"), messages("engine_input")
    )
    engineOpt.add_argument(
        "-L",
        "--language",
        action="store",
        dest="language",
        default=nettacker_global_configuration['nettacker_user_application_config']["language"],
        help=messages("select_language").format(languages_list),
    )
    engineOpt.add_argument(
        "-v",
        "--verbose",
        action="store_true",
        dest="verbose_mode",
        default=nettacker_global_configuration['nettacker_user_application_config']['verbose_mode'],
        help=messages("verbose_mode"),
    )
    engineOpt.add_argument(
        "--verbose-event",
        action="store_true",
        dest="verbose_event",
        default=nettacker_global_configuration['nettacker_user_application_config']['verbose_event'],
        help=messages("verbose_event"),
    )
    engineOpt.add_argument(
        "-V",
        "--version",
        action="store_true",
        default=nettacker_global_configuration['nettacker_user_application_config']['show_version'],
        dest="show_version",
        help=messages("software_version"),
    )
    engineOpt.add_argument(
        "-o",
        "--output",
        action="store",
        default=nettacker_global_configuration['nettacker_user_application_config']['report_path_filename'],
        dest="report_path_filename",
        help=messages("save_logs"),
    )
    engineOpt.add_argument(
        "--graph",
        action="store",
        default=nettacker_global_configuration['nettacker_user_application_config']["graph_name"],
        dest="graph_name",
        help=messages("available_graph").format(load_all_graphs()),
    )
    engineOpt.add_argument(
        "-h",
        "--help",
        action="store_true",
        default=nettacker_global_configuration['nettacker_user_application_config']["show_help_menu"],
        dest="show_help_menu",
        help=messages("help_menu"),
    )

    # Target Options
    target = parser.add_argument_group(
        messages("target"), messages("target_input")
    )
    target.add_argument(
        "-i",
        "--targets",
        action="store",
        dest="targets",
        default=nettacker_global_configuration['nettacker_user_application_config']["targets"],
        help=messages("target_list"),
    )
    target.add_argument(
        "-l",
        "--targets-list",
        action="store",
        dest="targets_list",
        default=nettacker_global_configuration['nettacker_user_application_config']["targets_list"],
        help=messages("read_target"),
    )

    # Exclude Module Name
    exclude_modules = list(load_all_modules(limit=10).keys())
    exclude_modules.remove("all")

    # Methods Options
    modules = parser.add_argument_group(
        messages("Method"), messages("scan_method_options")
    )
    modules.add_argument(
        "-m",
        "--modules",
        action="store",
        dest="selected_modules",
        default=nettacker_global_configuration['nettacker_user_application_config']["selected_modules"],
        help=messages("choose_scan_method").format(list(load_all_modules(limit=10).keys())),
    )
    modules.add_argument(
        "--modules-extra-args",
        action="store",
        dest="modules_extra_args",
        default=nettacker_global_configuration['nettacker_user_application_config']['modules_extra_args'],
        help=messages("modules_extra_args_help")
    )
    modules.add_argument(
        "--show-all-modules",
        action="store_true",
        dest="show_all_modules",
        default=nettacker_global_configuration['nettacker_user_application_config']["show_all_modules"],
        help=messages("show_all_modules"),
    )
    modules.add_argument(
        "--profile",
        action="store",
        default=nettacker_global_configuration['nettacker_user_application_config']["profiles"],
        dest="profiles",
        help=messages("select_profile").format(list(load_all_profiles(limit=10).keys())),
    )
    modules.add_argument(
        "--show-all-profiles",
        action="store_true",
        dest="show_all_profiles",
        default=nettacker_global_configuration['nettacker_user_application_config']["show_all_profiles"],
        help=messages("show_all_profiles"),
    )
    modules.add_argument(
        "-x",
        "--exclude-modules",
        action="store",
        dest="excluded_modules",
        default=nettacker_global_configuration['nettacker_user_application_config']["excluded_modules"],
        help=messages("exclude_scan_method").format(exclude_modules),
    )
    modules.add_argument(
        "-u",
        "--usernames",
        action="store",
        dest="usernames",
        default=nettacker_global_configuration['nettacker_user_application_config']["usernames"],
        help=messages("username_list"),
    )
    modules.add_argument(
        "-U",
        "--users-list",
        action="store",
        dest="usernames_list",
        default=nettacker_global_configuration['nettacker_user_application_config']["usernames_list"],
        help=messages("username_from_file"),
    )
    modules.add_argument(
        "-p",
        "--passwords",
        action="store",
        dest="passwords",
        default=nettacker_global_configuration['nettacker_user_application_config']["passwords"],
        help=messages("password_seperator"),
    )
    modules.add_argument(
        "-P",
        "--passwords-list",
        action="store",
        dest="passwords_list",
        default=nettacker_global_configuration['nettacker_user_application_config']["passwords_list"],
        help=messages("read_passwords"),
    )
    modules.add_argument(
        "-g",
        "--ports",
        action="store",
        dest="ports",
        default=nettacker_global_configuration['nettacker_user_application_config']["ports"],
        help=messages("port_seperator"),
    )
    modules.add_argument(
        "--user-agent",
        action="store",
        dest="user_agent",
        default=nettacker_global_configuration['nettacker_user_application_config']["user_agent"],
        help=messages("select_user_agent"),
    )
    modules.add_argument(
        "-T",
        "--timeout",
        action="store",
        dest="timeout",
        default=nettacker_global_configuration['nettacker_user_application_config']["timeout"],
        type=float,
        help=messages("read_passwords"),
    )
    modules.add_argument(
        "-w",
        "--time-sleep-between-requests",
        action="store",
        dest="time_sleep_between_requests",
        default=nettacker_global_configuration['nettacker_user_application_config']["time_sleep_between_requests"],
        type=float,
        help=messages("time_to_sleep"),
    )
    modules.add_argument(
        "-r",
        "--range",
        action="store_true",
        default=nettacker_global_configuration['nettacker_user_application_config']["scan_ip_range"],
        dest="scan_ip_range",
        help=messages("range"),
    )
    modules.add_argument(
        "-s",
        "--sub-domains",
        action="store_true",
        default=nettacker_global_configuration['nettacker_user_application_config']["scan_subdomains"],
        dest="scan_subdomains",
        help=messages("subdomains"),
    )
    modules.add_argument(
        "--skip-service-discovery",
        action="store_true",
        default=nettacker_global_configuration['nettacker_user_application_config']["skip_service_discovery"],
        dest="skip_service_discovery",
        help=messages("skip_service_discovery")
    )
    modules.add_argument(
        "-t",
        "--thread-per-host",
        action="store",
        default=nettacker_global_configuration['nettacker_user_application_config']["thread_per_host"],
        type=int,
        dest="thread_per_host",
        help=messages("thread_number_connections"),
    )
    modules.add_argument(
        "-M",
        "--parallel-module-scan",
        action="store",
        default=nettacker_global_configuration['nettacker_user_application_config']["parallel_module_scan"],
        type=int,
        dest="parallel_module_scan",
        help=messages("thread_number_modules"),
    )
    modules.add_argument(
        "--set-hardware-usage",
        action="store",
        dest="set_hardware_usage",
        default=nettacker_global_configuration['nettacker_user_application_config']['set_hardware_usage'],
        help=messages("set_hardware_usage")
    )
    modules.add_argument(
        "-R",
        "--socks-proxy",
        action="store",
        dest="socks_proxy",
        default=nettacker_global_configuration['nettacker_user_application_config']["socks_proxy"],
        help=messages("outgoing_proxy"),
    )
    modules.add_argument(
        "--retries",
        action="store",
        dest="retries",
        type=int,
        default=nettacker_global_configuration['nettacker_user_application_config']["retries"],
        help=messages("connection_retries"),
    )
    modules.add_argument(
        "--ping-before-scan",
        action="store_true",
        dest="ping_before_scan",
        default=nettacker_global_configuration['nettacker_user_application_config']["ping_before_scan"],
        help=messages("ping_before_scan"),
    )

    # API Options
    api = parser.add_argument_group(
        messages("API"),
        messages("API_options")
    )
    api.add_argument(
        "--start-api",
        action="store_true",
        dest="start_api_server",
        default=nettacker_global_configuration['nettacker_api_config']["start_api_server"],
        help=messages("start_api_server")
    )
    api.add_argument(
        "--api-host",
        action="store",
        dest="api_hostname",
        default=nettacker_global_configuration['nettacker_api_config']["api_hostname"],
        help=messages("API_host")
    )
    api.add_argument(
        "--api-port",
        action="store",
        dest="api_port",
        default=nettacker_global_configuration['nettacker_api_config']["api_port"],
        help=messages("API_port")
    )
    api.add_argument(
        "--api-debug-mode",
        action="store_true",
        dest="api_debug_mode",
        default=nettacker_global_configuration['nettacker_api_config']["api_debug_mode"],
        help=messages("API_debug")
    )
    api.add_argument(
        "--api-access-key",
        action="store",
        dest="api_access_key",
        default=nettacker_global_configuration['nettacker_api_config']["api_access_key"],
        help=messages("API_access_key")
    )
    api.add_argument(
        "--api-client-whitelisted-ips",
        action="store",
        dest="api_client_whitelisted_ips",
        default=nettacker_global_configuration['nettacker_api_config']["api_client_whitelisted_ips"],
        help=messages("define_whie_list")
    )
    api.add_argument(
        "--api-access-log",
        action="store",
        dest="api_access_log",
        default=nettacker_global_configuration['nettacker_api_config']["api_access_log"],
        help=messages("API_access_log_file")
    )
    api.add_argument(
        "--api-cert",
        action="store",
        dest="api_cert",
        help=messages("API_cert")
    )
    api.add_argument(
        "--api-cert-key",
        action="store",
        dest="api_cert_key",
        help=messages("API_cert_key")
    )

    # Return Options
    return parser


def check_all_required(parser, api_forms=None):
    """
    check all rules and requirements for ARGS

    Args:
        parser: parser from argparse
        api_forms: values from API

    Returns:
        all ARGS with applied rules
    """
    # Checking Requirements
    options = parser.parse_args() if not api_forms else api_forms
    modules_list = load_all_modules(full_details=True)
    profiles_list = load_all_profiles()

    # Check Help Menu
    if options.show_help_menu:
        parser.print_help()
        write("\n\n")
        write(messages("license"))
        die_success()

    # Check version
    if options.show_version:
        info(
            messages("current_version").format(
                color("yellow"),
                version_info()[0],
                color("reset"),
                color("cyan"),
                version_info()[1],
                color("reset"),
                color("green"),
            )
        )
        die_success()

    if options.show_all_modules:
        messages("loading_modules")
        for module in modules_list:
            info(
                messages("module_profile_full_information").format(
                    color('cyan'),
                    module,
                    color('green'),
                    ", ".join(
                        [
                            "{key}: {value}".format(
                                key=key, value=modules_list[module][key]
                            ) for key in modules_list[module]
                        ]
                    )
                )
            )
        die_success()

    if options.show_all_profiles:
        messages("loading_profiles")
        for profile in profiles_list:
            info(
                messages("module_profile_full_information").format(
                    color('cyan'),
                    profile,
                    color('green'),
                    ", ".join(profiles_list[profile])
                )
            )
        die_success()

    # API mode
    if options.start_api_server:
        if '--start-api' in sys.argv and api_forms:
            die_failure(messages("cannot_run_api_server"))
        from api.engine import start_api_server
        if options.api_client_whitelisted_ips:
            if type(options.api_client_whitelisted_ips) == str:
                options.api_client_whitelisted_ips = options.api_client_whitelisted_ips.split(',')
            whielisted_ips = []
            for ip in options.api_client_whitelisted_ips:
                from core.ip import (is_single_ipv4,
                                     is_single_ipv6,
                                     is_ipv4_cidr,
                                     is_ipv6_range,
                                     is_ipv6_cidr,
                                     is_ipv4_range,
                                     generate_ip_range)
                if is_single_ipv4(ip) or is_single_ipv6(ip):
                    whielisted_ips.append(ip)
                elif is_ipv4_range(ip) or is_ipv6_range(ip) or is_ipv4_cidr(ip) or is_ipv6_cidr(ip):
                    whielisted_ips += generate_ip_range(ip)
            options.api_client_whitelisted_ips = whielisted_ips
        start_api_server(options)

    # Check the target(s)
    if not (options.targets or options.targets_list) or (options.targets and options.targets_list):
        parser.print_help()
        write("\n")
        die_failure(messages("error_target"))
    if options.targets:
        options.targets = list(set(options.targets.split(",")))
    if options.targets_list:
        try:
            options.targets = list(set(open(options.targets_list, "rb").read().decode().split()))
        except Exception:
            die_failure(
                messages("error_target_file").format(
                    options.targets_list
                )
            )

    # check for modules
    if not (options.selected_modules or options.profiles):
        die_failure(messages("scan_method_select"))
    if options.selected_modules:
        if options.selected_modules == 'all':
            options.selected_modules = list(set(modules_list.keys()))
            options.selected_modules.remove('all')
        else:
            options.selected_modules = list(set(options.selected_modules.split(',')))
        for module_name in options.selected_modules:
            if module_name not in modules_list:
                die_failure(
                    messages("scan_module_not_found").format(
                        module_name
                    )
                )
    if options.profiles:
        if not options.selected_modules:
            options.selected_modules = []
        if options.profiles == 'all':
            options.selected_modules = list(set(modules_list.keys()))
            options.selected_modules.remove('all')
        else:
            options.profiles = list(set(options.profiles.split(',')))
            for profile in options.profiles:
                if profile not in profiles_list:
                    die_failure(
                        messages("profile_404").format(
                            profile
                        )
                    )
                for module_name in profiles_list[profile]:
                    if module_name not in options.selected_modules:
                        options.selected_modules.append(module_name)

    # threading & processing
    if options.set_hardware_usage not in ['low', 'normal', 'high', 'maximum']:
        die_failure(
            messages("wrong_hardware_usage")
        )
    options.set_hardware_usage = select_maximum_cpu_core(options.set_hardware_usage)

    options.thread_per_host = int(options.thread_per_host)
    if not options.thread_per_host >= 1:
        options.thread_per_host = 1
    options.parallel_module_scan = int(options.parallel_module_scan)
    if not options.parallel_module_scan >= 1:
        options.parallel_module_scan = 1

    # Check for excluding modules
    if options.excluded_modules:
        options.excluded_modules = options.excluded_modules.split(",")
        if 'all' in options.excluded_modules:
            die_failure(messages("error_exclude_all"))
        for excluded_module in options.excluded_modules:
            if excluded_module in options.selected_modules:
                del options.selected_modules[excluded_module]

    # Check port(s)
    if options.ports:
        tmp_ports = []
        for port in options.ports.split(","):
            try:
                if "-" in port:
                    for port_number in range(int(port.split('-')[0]), int(port.split('-')[1]) + 1):
                        if port_number not in tmp_ports:
                            tmp_ports.append(port_number)
                else:
                    if int(port) not in tmp_ports:
                        tmp_ports.append(int(port))
            except Exception:
                die_failure(messages("ports_int"))
        options.ports = tmp_ports

    if options.user_agent == 'random_user_agent':
        options.user_agents = open(
            nettacker_global_config()['nettacker_paths']['web_browser_user_agents']
        ).read().split('\n')

    # Check user list
    if options.usernames:
        options.usernames = list(set(options.usernames.split(",")))
    elif options.usernames_list:
        try:
            options.usernames = list(set(open(options.usernames_list).read().split("\n")))
        except Exception:
            die_failure(
                messages("error_username").format(options.usernames_list)
            )

    # Check password list
    if options.passwords:
        options.passwords = list(set(options.passwords.split(",")))
    elif options.passwords_list:
        try:
            options.passwords = list(set(open(options.passwords_list).read().split("\n")))
        except Exception:
            die_failure(
                messages("error_passwords").format(options.passwords_list)
            )

    # Check output file
    try:
        temp_file = open(options.report_path_filename, "w")
        temp_file.close()
    except Exception:
        die_failure(
            messages("file_write_error").format(options.report_path_filename)
        )

    # Check Graph
    if options.graph_name:
        if options.graph_name not in load_all_graphs():
            die_failure(
                messages("graph_module_404").format(options.graph_name)
            )
        if not (options.report_path_filename.endswith(".html") or options.report_path_filename.endswith(".htm")):
            warn(messages("graph_output"))
            options.graph_name = None

    # check modules extra args
    if options.modules_extra_args:
        all_args = {}
        for args in options.modules_extra_args.split("&"):
            value = args.split('=')[1]
            if value.lower() == 'true':
                value = True
            elif value.lower() == 'false':
                value = False
            elif '.' in value:
                try:
                    value = float(value)
                except Exception as _:
                    del _
            elif '{' in value or '[' in value:
                try:
                    value = json.loads(value)
                except Exception as _:
                    del _
            else:
                try:
                    value = int(value)
                except Exception as _:
                    del _
            all_args[args.split('=')[0]] = value
        options.modules_extra_args = all_args

    options.timeout = float(options.timeout)
    options.time_sleep_between_requests = float(options.time_sleep_between_requests)
    options.retries = int(options.retries)
    return options

@@ -1,41 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import sys


def reset_color():
    """
    reset the color of terminal before exit
    """
    sys.stdout.write("\033[0m")


def color(color_name):
    """
    color_names for terminal and windows cmd

    Args:
        color_name: color name

    Returns:
        color_name values or empty string
    """
    if color_name == "reset":
        return "\033[0m"
    elif color_name == "grey":
        return "\033[1;30m"
    elif color_name == "red":
        return "\033[1;31m"
    elif color_name == "green":
        return "\033[1;32m"
    elif color_name == "yellow":
        return "\033[1;33m"
    elif color_name == "blue":
        return "\033[1;34m"
    elif color_name == "purple":
        return "\033[1;35m"
    elif color_name == "cyan":
        return "\033[1;36m"
    elif color_name == "white":
        return "\033[1;37m"
    return ""

@@ -1,148 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import sys
import os
import json

from core.die import die_failure
from core import color


def version_info():
    """
    version information of the framework

    Returns:
        an array of version and code name
    """
    from config import nettacker_paths
    return open(nettacker_paths()['version_file']).read().split()


def logo():
    """
    OWASP Nettacker Logo
    """
    from core.alert import write_to_api_console
    from core import color
    from core.color import reset_color
    from config import nettacker_paths
    from config import nettacker_user_application_config
    write_to_api_console(
        open(
            nettacker_paths()['logo_file']
        ).read().format(
            version_info()[0],
            version_info()[1],
            color.color('red'),
            color.color('reset'),
            color.color('yellow'),
            color.color('reset'),
            color.color('cyan'),
            color.color('reset'),
            color.color('cyan'),
            color.color('reset'),
            color.color('cyan'),
            color.color('reset')
        )
    )
    reset_color()


def python_version():
    """
    version of python

    Returns:
        integer version of python (2 or 3)
    """
    return int(sys.version_info[0])


def os_name():
    """
    OS name

    Returns:
        OS name in string
    """
    return sys.platform


def check_dependencies():
    if python_version() == 2:
        sys.exit(
            color.color("red") + "[X] " + color.color("yellow")
            + "Python2 is No longer supported!" + color.color("reset")
        )

    # check os compatibility
    from config import nettacker_paths, nettacker_database_config
    external_modules = open(nettacker_paths()["requirements_path"]).read().split('\n')
    for module_name in external_modules:
        try:
            __import__(
                module_name.split('==')[0] if 'library_name=' not in module_name
                else module_name.split('library_name=')[1].split()[0]
            )
        except Exception:
            if 'is_optional=true' not in module_name:
                sys.exit(
                    color.color("red") + "[X] " + color.color("yellow")
                    + "pip3 install -r requirements.txt ---> "
                    + module_name.split('#')[0].strip() + " not installed!"
                    + color.color("reset")
                )
    logo()
    from core.alert import messages
    if not ('linux' in os_name() or 'darwin' in os_name()):
        die_failure(messages("error_platform"))
    if not os.path.exists(nettacker_paths()["home_path"]):
        try:
            os.mkdir(nettacker_paths()["home_path"])
            os.mkdir(nettacker_paths()["tmp_path"])
            os.mkdir(nettacker_paths()["results_path"])
        except Exception:
            die_failure("cannot access the directory {0}".format(
                nettacker_paths()["home_path"])
            )
    if not os.path.exists(nettacker_paths()["tmp_path"]):
        try:
            os.mkdir(nettacker_paths()["tmp_path"])
        except Exception:
            die_failure("cannot access the directory {0}".format(
                nettacker_paths()["results_path"])
            )
    if not os.path.exists(nettacker_paths()["results_path"]):
        try:
            os.mkdir(nettacker_paths()["results_path"])
        except Exception:
            die_failure("cannot access the directory {0}".format(
                nettacker_paths()["results_path"])
            )
    if nettacker_database_config()["DB"] == "sqlite":
        try:
            if not os.path.isfile(nettacker_paths()["database_path"]):
                from database.sqlite_create import sqlite_create_tables
                sqlite_create_tables()
        except Exception:
            die_failure("cannot access the directory {0}".format(
                nettacker_paths()["home_path"])
            )
    elif nettacker_database_config()["DB"] == "mysql":
        try:
            from database.mysql_create import (
                mysql_create_tables,
                mysql_create_database
            )
            mysql_create_database()
            mysql_create_tables()
        except Exception:
            die_failure(messages("database_connection_failed"))
    elif nettacker_database_config()["DB"] == "postgres":
        try:
            from database.postgres_create import postgres_create_database
            postgres_create_database()
        except Exception:
            die_failure(messages("database_connection_failed"))
    else:
        die_failure(messages("invalid_database"))

@@ -1,191 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import json
import csv
import texttable
import html

from core.alert import messages
from core.alert import info, write
from core.compatible import version_info
from core.time import now
from core.die import die_failure
from database.db import get_logs_by_scan_unique_id
from database.db import submit_report_to_db
from core.utility import merge_logs_to_list


def build_graph(graph_name, events):
    """
    build a graph

    Args:
        graph_name: graph name
        events: list of events

    Returns:
        graph in HTML type
    """
    info(messages("build_graph"))
    try:
        start = getattr(
            __import__(
                'lib.graph.{0}.engine'.format(
                    graph_name.rsplit('_graph')[0]
                ),
                fromlist=['start']
            ),
            'start'
        )
    except Exception:
        die_failure(
            messages("graph_module_unavailable").format(graph_name)
        )
    info(messages("finish_build_graph"))
    return start(
        events
    )


def build_texttable(events):
    """
    value['date'], value["target"], value['module_name'], value['scan_unique_id'],
    value['options'], value['event']

    build a text table with generated events related to the scan

    :param events: all events
    :return:
        array [text table, event_number]
    """
    _table = texttable.Texttable()
    table_headers = [
        'date',
        'target',
        'module_name',
        'port',
        'logs'
    ]
    _table.add_rows(
        [
            table_headers
        ]
    )
    for event in events:
        log = merge_logs_to_list(json.loads(event["json_event"]), [])
        _table.add_rows(
            [
                table_headers,
                [
                    event['date'],
                    event['target'],
                    event['module_name'],
                    event['port'],
                    "\n".join(log) if log else "Detected"
                ]
            ]
        )
    return _table.draw().encode('utf8') + b'\n\n' + messages("nettacker_version_details").format(
        version_info()[0],
        version_info()[1],
        now()
    ).encode('utf8') + b"\n"


def create_report(options, scan_unique_id):
    """
    sort all events, create log file in HTML/TEXT/JSON and remove old logs

    Args:
        options: parsing options
        scan_unique_id: scan unique id

    Returns:
        True if success otherwise None
    """
    all_scan_logs = get_logs_by_scan_unique_id(scan_unique_id)
    if not all_scan_logs:
        info(messages("no_events_for_report"))
        return True
    report_path_filename = options.report_path_filename
    if (
        len(report_path_filename) >= 5 and report_path_filename[-5:] == '.html'
    ) or (
        len(report_path_filename) >= 4 and report_path_filename[-4:] == '.htm'
    ):
        if options.graph_name:
            html_graph = build_graph(options.graph_name, all_scan_logs)
        else:
            html_graph = ''
        from lib.html_log import log_data
        html_table_content = log_data.table_title.format(
            html_graph,
            log_data.css_1,
            'date',
            'target',
            'module_name',
            'port',
            'logs',
            'json_event'
        )
        index = 1
        for event in all_scan_logs:
            log = merge_logs_to_list(json.loads(event["json_event"]), [])
            html_table_content += log_data.table_items.format(
                event["date"],
                event["target"],
                event["module_name"],
                event["port"],
                "<br>".join(log) if log else "Detected",  # event["event"], #log
                index,
                html.escape(event["json_event"])
            )
            index += 1
        html_table_content += log_data.table_end + '<div id="json_length">' + str(index - 1) + '</div>' + '<p class="footer">' + messages("nettacker_version_details").format(
            version_info()[0],
            version_info()[1],
            now()
        ) + '</p>' + log_data.json_parse_js
        with open(report_path_filename, 'w', encoding='utf-8') as save:
            save.write(html_table_content + '\n')
            save.close()
    elif len(report_path_filename) >= 5 and report_path_filename[-5:] == '.json':
        with open(report_path_filename, 'w', encoding='utf-8') as save:
            save.write(
                str(
                    json.dumps(all_scan_logs)
                ) + '\n'
            )
            save.close()
    elif len(report_path_filename) >= 5 and report_path_filename[-4:] == '.csv':
        keys = all_scan_logs[0].keys()
        with open(report_path_filename, 'a') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=keys)
            writer.writeheader()
            for log in all_scan_logs:
                dict_data = {
                    key: value for key, value in log.items() if key in keys
                }
                writer.writerow(dict_data)
            csvfile.close()
    else:
        with open(report_path_filename, 'wb') as save:
            save.write(
                build_texttable(all_scan_logs)
            )
            save.close()
    write(build_texttable(all_scan_logs))
    submit_report_to_db(
        {
            "date": now(model=None),
            "scan_unique_id": scan_unique_id,
            "options": vars(options),
        }
    )
    info(messages("file_saved").format(report_path_filename))
    return True

@@ -1,328 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import copy
import os
import socket
import yaml
import time
import json

from glob import glob
from io import StringIO

from core.socks_proxy import set_socks_proxy


class NettackerModules:
    def __init__(self, options, module_name, scan_unique_id, process_number, thread_number, total_number_threads):
        from config import nettacker_paths
        self.module_name = module_name
        self.process_number = process_number
        self.module_thread_number = thread_number
        self.total_module_thread_number = total_number_threads
        self.module_inputs = vars(options)
        if options.modules_extra_args:
            for module_extra_args in self.module_inputs['modules_extra_args']:
                self.module_inputs[module_extra_args] = \
                    self.module_inputs['modules_extra_args'][module_extra_args]
        self.scan_unique_id = scan_unique_id
        self.target = options.target
        self.skip_service_discovery = options.skip_service_discovery
        self.discovered_services = None
        self.ignored_core_modules = [
            'subdomain_scan',
            'icmp_scan',
            'port_scan'
        ]
        self.service_discovery_signatures = list(set(yaml.load(
            StringIO(
                open(nettacker_paths()['modules_path'] + '/scan/port.yaml').read().format(
                    **{'target': 'dummy'}
                )
            ),
            Loader=yaml.FullLoader
        )['payloads'][0]['steps'][0]['response']['conditions'].keys()))
        self.libraries = [
            module_protocol.split('.py')[0] for module_protocol in
            os.listdir(nettacker_paths()['module_protocols_path']) if
            module_protocol.endswith('.py') and module_protocol != '__init__.py'
        ]

    def load(self):
        from config import nettacker_paths
        from core.utility import find_and_replace_configuration_keys
        from database.db import find_events
        self.module_content = find_and_replace_configuration_keys(
            yaml.load(
                StringIO(
                    open(
                        nettacker_paths()['modules_path'] +
                        '/' +
                        self.module_name.split('_')[-1].split('.yaml')[0] +
                        '/' +
                        '_'.join(self.module_name.split('_')[:-1]) +
                        '.yaml',
                        'r'
                    ).read().format(
                        **self.module_inputs
                    )
                ),
                Loader=yaml.FullLoader
            ),
            self.module_inputs
        )
        if not self.skip_service_discovery and self.module_name not in self.ignored_core_modules:
            services = {}
            for service in find_events(self.target, 'port_scan', self.scan_unique_id):
                service_event = json.loads(service.json_event)
                port = service_event['ports']
                protocols = service_event['response']['conditions_results'].keys()
                for protocol in protocols:
                    if 'core_' + protocol in self.libraries and protocol:
                        if protocol in services:
                            services[protocol].append(port)
                        else:
                            services[protocol] = [port]
            self.discovered_services = copy.deepcopy(services)
            index_payload = 0
            for payload in copy.deepcopy(self.module_content['payloads']):
                if payload['library'] not in self.discovered_services and \
                        payload['library'] in self.service_discovery_signatures:
                    del self.module_content['payloads'][index_payload]
                    index_payload -= 1
                else:
                    index_step = 0
                    for step in copy.deepcopy(
                        self.module_content['payloads'][index_payload]['steps']
                    ):
                        find_and_replace_configuration_keys(
                            step,
                            {
                                "ports": self.discovered_services[payload['library']]
                            }
                        )
                        self.module_content['payloads'][index_payload]['steps'][index_step] = step
                        index_step += 1
                index_payload += 1

    def generate_loops(self):
        from core.utility import expand_module_steps
        self.module_content['payloads'] = expand_module_steps(self.module_content['payloads'])

    def sort_loops(self):
        steps = []
        for index in range(len(self.module_content['payloads'])):
            for step in copy.deepcopy(self.module_content['payloads'][index]['steps']):
                if 'dependent_on_temp_event' not in step[0]['response']:
                    steps.append(step)
            for step in copy.deepcopy(self.module_content['payloads'][index]['steps']):
                if 'dependent_on_temp_event' in step[0]['response'] and \
                        'save_to_temp_events_only' in step[0]['response']:
                    steps.append(step)
            for step in copy.deepcopy(self.module_content['payloads'][index]['steps']):
                if 'dependent_on_temp_event' in step[0]['response'] and \
                        'save_to_temp_events_only' not in step[0]['response']:
                    steps.append(step)
            self.module_content['payloads'][index]['steps'] = steps

    def start(self):
        from terminable_thread import Thread
        from core.utility import wait_for_threads_to_finish
        active_threads = []
        from core.alert import warn
        from core.alert import verbose_event_info
        from core.alert import messages

        # counting total number of requests
        total_number_of_requests = 0
        for payload in self.module_content['payloads']:
            if 'core_' + payload['library'] not in self.libraries:
                warn(messages("library_not_supported").format(payload['library']))
                return None
            for step in payload['steps']:
                total_number_of_requests += len(step)
        request_number_counter = 0
        for payload in self.module_content['payloads']:
            protocol = getattr(
                __import__(
                    'core.module_protocols.core_{library}'.format(library=payload['library']),
                    fromlist=['Engine']
                ),
                'Engine'
            )
            for step in payload['steps']:
                for sub_step in step:
                    thread = Thread(
                        target=protocol.run,
                        args=(
                            sub_step,
                            self.module_name,
                            self.target,
                            self.scan_unique_id,
                            self.module_inputs,
                            self.process_number,
                            self.module_thread_number,
                            self.total_module_thread_number,
                            request_number_counter,
                            total_number_of_requests
                        )
                    )
                    thread.name = f"{self.target} -> {self.module_name} -> {sub_step}"
                    request_number_counter += 1
                    verbose_event_info(
                        messages("sending_module_request").format(
                            self.process_number,
                            self.module_name,
                            self.target,
                            self.module_thread_number,
                            self.total_module_thread_number,
                            request_number_counter,
                            total_number_of_requests
                        )
                    )
                    thread.start()
                    time.sleep(self.module_inputs['time_sleep_between_requests'])
                    active_threads.append(thread)
                    wait_for_threads_to_finish(
                        active_threads,
                        maximum=self.module_inputs['thread_per_host'],
                        terminable=True
                    )
        wait_for_threads_to_finish(
            active_threads,
            maximum=None,
            terminable=True
        )


def load_all_graphs():
    """
    load all available graphs

    Returns:
        an array of graph names
    """
    from config import nettacker_paths
    graph_names = []
    for graph_library in glob(os.path.join(nettacker_paths()['home_path'] + '/lib/graph/*/engine.py')):
        graph_names.append(graph_library.split('/')[-2] + '_graph')
    return list(set(graph_names))


def load_all_languages():
    """
    load all available languages

    Returns:
        an array of languages
    """
    languages_list = []
    from config import nettacker_paths
    for language in glob(os.path.join(nettacker_paths()['home_path'] + '/lib/messages/*.yaml')):
        languages_list.append(language.split('/')[-1].split('.')[0])
    return list(set(languages_list))


def load_all_modules(limit=-1, full_details=False):
    """
    load all available modules

    limit: return limited number of modules
    full: with full details

    Returns:
        an array of all module names
    """
    # Search for Modules
    from config import nettacker_paths
    from core.utility import sort_dictionary
    if full_details:
        import yaml
    module_names = {}
    for module_name in glob(os.path.join(nettacker_paths()['modules_path'] + '/*/*.yaml')):
        libname = module_name.split('/')[-1].split('.')[0]
        category = module_name.split('/')[-2]
        module_names[libname + '_' + category] = yaml.load(
            StringIO(
                open(
                    nettacker_paths()['modules_path'] +
                    '/' +
                    category +
                    '/' +
                    libname +
                    '.yaml',
                    'r'
                ).read().split('payload:')[0]
            ),
            Loader=yaml.FullLoader
        )['info'] if full_details else None
        if len(module_names) == limit:
            module_names['...'] = {}
            break
    module_names = sort_dictionary(module_names)
    module_names['all'] = {}
    return module_names


def load_all_profiles(limit=-1):
    """
    load all available profiles

    Returns:
        an array of all profile names
    """
    from core.utility import sort_dictionary
    all_modules_with_details = load_all_modules(limit=limit, full_details=True)
    profiles = {}
    if '...' in all_modules_with_details:
        del all_modules_with_details['...']
    del all_modules_with_details['all']
    for key in all_modules_with_details:
        for tag in all_modules_with_details[key]['profiles']:
            if tag not in profiles:
                profiles[tag] = []
                profiles[tag].append(key)
            else:
                profiles[tag].append(key)
            if len(profiles) == limit:
                profiles = sort_dictionary(profiles)
                profiles['...'] = []
                profiles['all'] = []
                return profiles
    profiles = sort_dictionary(profiles)
    profiles['all'] = []
    return profiles


def perform_scan(options, target, module_name, scan_unique_id, process_number, thread_number, total_number_threads):
    from core.alert import (verbose_event_info,
                            messages)
    socket.socket, socket.getaddrinfo = set_socks_proxy(options.socks_proxy)
    options.target = target
    validate_module = NettackerModules(
        options,
        module_name,
        scan_unique_id,
        process_number,
        thread_number,
        total_number_threads
    )
    validate_module.load()
    validate_module.generate_loops()
    validate_module.sort_loops()
    validate_module.start()
    verbose_event_info(
        messages("finished_parallel_module_scan").format(
            process_number,
            module_name,
            target,
            thread_number,
            total_number_threads
        )
    )
    return os.EX_OK

@@ -1,36 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import yaml
from io import StringIO


def load_yaml(filename):
    return yaml.load(
        StringIO(
            open(filename, 'r').read()
        ),
        Loader=yaml.FullLoader
    )


class load_message:
    def __init__(self):
        from core.utility import application_language
        from config import nettacker_global_config
        self.language = application_language()
        self.messages = load_yaml(
            "{messages_path}/{language}.yaml".format(
                messages_path=nettacker_global_config()['nettacker_paths']['messages_path'],
                language=self.language
            )
        )
        if self.language != 'en':
            self.messages_en = load_yaml(
                "{messages_path}/en.yaml".format(
                    messages_path=nettacker_global_config()['nettacker_paths']['messages_path']
                )
            )
            for message_id in self.messages_en:
                if message_id not in self.messages:
                    self.messages[message_id] = self.messages_en[message_id]
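
This file is one place where the "extract YAML parsing logic into a separate module" bullet applies: `load_yaml` here wraps `open().read()` in a `StringIO` for no benefit and never closes the file handle. A sketch of the extracted helper (module name and API are my assumptions):

# Hypothetical sketch of the extracted YAML helper.
from pathlib import Path

import yaml


def load_yaml(path):
    # read_text closes the file that the old open() call leaked;
    # safe_load restricts parsing to standard YAML tags.
    return yaml.safe_load(Path(path).read_text(encoding="utf-8"))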

@@ -1,2 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

@@ -1,93 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import copy
import ftplib

# from core.utility import reverse_and_regex_condition
from core.utility import process_conditions
from core.utility import get_dependent_results_from_database
from core.utility import replace_dependent_values


# def response_conditions_matched(sub_step, response):
#     return response


class NettackFTPLib:
    def ftp_brute_force(host, ports, usernames, passwords, timeout):
        ftp_connection = ftplib.FTP(timeout=int(timeout))
        ftp_connection.connect(host, int(ports))
        ftp_connection.login(usernames, passwords)
        ftp_connection.close()
        return {
            "host": host,
            "username": usernames,
            "password": passwords,
            "port": ports
        }

    def ftps_brute_force(host, ports, usernames, passwords, timeout):
        ftp_connection = ftplib.FTP_TLS(timeout=int(timeout))
        ftp_connection.connect(host, int(ports))
        ftp_connection.login(usernames, passwords)
        ftp_connection.close()
        return {
            "host": host,
            "username": usernames,
            "password": passwords,
            "port": ports
        }


class Engine:
    def run(
        sub_step,
        module_name,
        target,
        scan_unique_id,
        options,
        process_number,
        module_thread_number,
        total_module_thread_number,
        request_number_counter,
        total_number_of_requests
    ):
        backup_method = copy.deepcopy(sub_step['method'])
        backup_response = copy.deepcopy(sub_step['response'])
        del sub_step['method']
        del sub_step['response']
        if 'dependent_on_temp_event' in backup_response:
            temp_event = get_dependent_results_from_database(
                target,
                module_name,
                scan_unique_id,
                backup_response['dependent_on_temp_event']
            )
            sub_step = replace_dependent_values(
                sub_step,
                temp_event
            )
        action = getattr(NettackFTPLib, backup_method, None)
        for _ in range(options['retries']):
            try:
                response = action(**sub_step)
                break
            except Exception as _:
                response = []
        sub_step['method'] = backup_method
        sub_step['response'] = backup_response
        sub_step['response']['conditions_results'] = response
        # sub_step['response']['conditions_results'] = response_conditions_matched(sub_step, response)
        return process_conditions(
            sub_step,
            module_name,
            target,
            scan_unique_id,
            options,
            response,
            process_number,
            module_thread_number,
            total_module_thread_number,
            request_number_counter,
            total_number_of_requests
        )
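
`Engine.run` here repeats the same scaffolding found in every other `core_*.py` protocol library: back up `method` and `response`, resolve temp-event dependencies, run a retry loop, restore, then hand off to `process_conditions`. The "use the base class for specific protocol libraries" bullet moves that into one place; a hedged sketch (class and method names are mine, not the refactor's):

# Hypothetical sketch of the shared scaffolding as a base class.
class BaseEngine:
    library = None  # subclasses point this at their protocol class

    @classmethod
    def run_action(cls, sub_step, options):
        method = sub_step.pop('method')
        response_spec = sub_step.pop('response')
        action = getattr(cls.library, method, None)
        response = []
        for _ in range(options['retries']):  # same retry semantics as above
            try:
                response = action(**sub_step)
                break
            except Exception:
                response = []
        sub_step['method'] = method
        sub_step['response'] = response_spec
        sub_step['response']['conditions_results'] = response
        return sub_step


class FtpEngine(BaseEngine):
    library = NettackFTPLib  # reuses the protocol class defined above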

@@ -1,197 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
import aiohttp
import asyncio
import copy
import random
import time
from core.utility import reverse_and_regex_condition
from core.utility import process_conditions
from core.utility import get_dependent_results_from_database
from core.utility import replace_dependent_values
from core.utility import replace_dependent_response
async def perform_request_action(action, request_options):
start_time = time.time()
async with action(**request_options) as response:
return {
"reason": response.reason,
"status_code": str(response.status),
"content": await response.content.read(),
"headers": dict(response.headers),
"responsetime": time.time() - start_time
}
async def send_request(request_options, method):
async with aiohttp.ClientSession() as session:
action = getattr(session, method, None)
response = await asyncio.gather(
*[
asyncio.ensure_future(
perform_request_action(action, request_options)
)
]
)
return response[0]
def response_conditions_matched(sub_step, response):
if not response:
return {}
condition_type = sub_step['response']['condition_type']
conditions = sub_step['response']['conditions']
condition_results = {}
for condition in conditions:
if condition in ['reason', 'status_code', 'content']:
regex = re.findall(re.compile(conditions[condition]['regex']), response[condition])
reverse = conditions[condition]['reverse']
condition_results[condition] = reverse_and_regex_condition(regex, reverse)
if condition == 'headers':
# convert headers to case insensitive dict
for key in response["headers"].copy():
response['headers'][key.lower()] = response['headers'][key]
condition_results['headers'] = {}
for header in conditions['headers']:
reverse = conditions['headers'][header]['reverse']
try:
regex = re.findall(
re.compile(conditions['headers'][header]['regex']),
response['headers'][header.lower()] if header.lower() in response['headers'] else False
)
condition_results['headers'][header] = reverse_and_regex_condition(regex, reverse)
except TypeError:
condition_results['headers'][header] = []
if condition == 'responsetime':
    # compare "<operator> <seconds>", e.g. "<= 1.0", without resorting to exec()
    comparators = {
        "==": lambda a, b: a == b,
        "!=": lambda a, b: a != b,
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
        ">": lambda a, b: a > b,
        "<": lambda a, b: a < b,
    }
    expression = conditions['responsetime'].split()
    if len(expression) == 2 and expression[0] in comparators:
        condition_results['responsetime'] = (
            response['responsetime']
            if comparators[expression[0]](response['responsetime'], float(expression[1]))
            else []
        )
    else:
        condition_results['responsetime'] = []
if condition_type.lower() == "or":
# if one of the values is matched, it will be a string or float object in the array;
# count the [] entries: if not every result is [], at least one condition matched.
if (
'headers' not in condition_results and
(
list(condition_results.values()).count([]) != len(list(condition_results.values()))
)
) or (
'headers' in condition_results and
(
len(list(condition_results.values())) +
len(list(condition_results['headers'].values())) -
list(condition_results.values()).count([]) -
list(condition_results['headers'].values()).count([]) -
1 != 0
)
):
if sub_step['response'].get('log', False):
condition_results['log'] = sub_step['response']['log']
if 'response_dependent' in condition_results['log']:
condition_results['log'] = replace_dependent_response(condition_results['log'], condition_results)
return condition_results
else:
return {}
if condition_type.lower() == "and":
if [] in condition_results.values() or \
('headers' in condition_results and [] in condition_results['headers'].values()):
return {}
else:
if sub_step['response'].get('log', False):
condition_results['log'] = sub_step['response']['log']
if 'response_dependent' in condition_results['log']:
condition_results['log'] = replace_dependent_response(condition_results['log'], condition_results)
return condition_results
return {}
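
A tiny worked example of the matcher above, with a sub_step shaped the way the YAML modules produce it (values illustrative):

# Illustrative only: one 'or' condition on the status code.
sub_step = {
    "response": {
        "condition_type": "or",
        "conditions": {"status_code": {"regex": "200", "reverse": False}},
    }
}
response = {
    "reason": "OK",
    "status_code": "200",
    "content": "",
    "headers": {},
    "responsetime": 0.01,
}
print(response_conditions_matched(sub_step, response))  # {'status_code': ['200']}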
class Engine:
def run(
sub_step,
module_name,
target,
scan_unique_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
):
backup_method = copy.deepcopy(sub_step['method'])
backup_response = copy.deepcopy(sub_step['response'])
backup_iterative_response_match = copy.deepcopy(
sub_step['response']['conditions'].get('iterative_response_match', None))
if options['user_agent'] == 'random_user_agent':
sub_step['headers']['User-Agent'] = random.choice(options['user_agents'])
del sub_step['method']
if 'dependent_on_temp_event' in backup_response:
temp_event = get_dependent_results_from_database(
target,
module_name,
scan_unique_id,
backup_response['dependent_on_temp_event']
)
sub_step = replace_dependent_values(
sub_step,
temp_event
)
backup_response = copy.deepcopy(sub_step['response'])
del sub_step['response']
for _ in range(options['retries']):
try:
response = asyncio.run(send_request(sub_step, backup_method))
response['content'] = response['content'].decode(errors="ignore")
break
except Exception:
response = []
sub_step['method'] = backup_method
sub_step['response'] = backup_response
if backup_iterative_response_match is not None:
backup_iterative_response_match = copy.deepcopy(
sub_step['response']['conditions'].get('iterative_response_match'))
del sub_step['response']['conditions']['iterative_response_match']
sub_step['response']['conditions_results'] = response_conditions_matched(sub_step, response)
if backup_iterative_response_match is not None and (
sub_step['response']['conditions_results'] or sub_step['response']['condition_type'] == 'or'):
sub_step['response']['conditions']['iterative_response_match'] = backup_iterative_response_match
for key in sub_step['response']['conditions']['iterative_response_match']:
result = response_conditions_matched(
sub_step['response']['conditions']['iterative_response_match'][key], response)
if result:
sub_step['response']['conditions_results'][key] = result
return process_conditions(
sub_step,
module_name,
target,
scan_unique_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
)
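
The iterative_response_match branch above re-runs the matcher once per nested key after the base conditions pass; a hedged sketch of the structure it expects (keys and regexes are illustrative, real modules define this in YAML):

# Illustrative shape only -- each nested value is itself a matcher input.
iterative_response_match = {
    "wordpress": {
        "response": {
            "condition_type": "and",
            "conditions": {"content": {"regex": "wp-content", "reverse": False}},
        }
    },
    "drupal": {
        "response": {
            "condition_type": "and",
            "conditions": {"content": {"regex": "Drupal", "reverse": False}},
        }
    },
}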

View File

@ -1,87 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import copy
import poplib
from core.utility import process_conditions
from core.utility import get_dependent_results_from_database
from core.utility import replace_dependent_values
class NettackPOP3Lib:
def pop3_brute_force(host, ports, usernames, passwords, timeout):
server = poplib.POP3(host, port=ports, timeout=timeout)
server.user(usernames)
server.pass_(passwords)
server.quit()
return {
"host": host,
"username": usernames,
"password": passwords,
"port": ports
}
def pop3_ssl_brute_force(host, ports, usernames, passwords, timeout):
server = poplib.POP3_SSL(host, port=ports, timeout=timeout)
server.user(usernames)
server.pass_(passwords)
server.quit()
return {
"host": host,
"username": usernames,
"password": passwords,
"port": ports
}
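
These helpers signal success simply by returning without raising; a hedged example call (host and credentials are illustrative, and a reachable POP3 service is assumed):

# Illustrative only -- raises on wrong credentials or an unreachable host.
result = NettackPOP3Lib.pop3_brute_force("127.0.0.1", 110, "user1", "pass1", timeout=3)
print(result)  # {'host': '127.0.0.1', 'username': 'user1', 'password': 'pass1', 'port': 110}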
class Engine:
def run(
sub_step,
module_name,
target,
scan_unique_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
):
backup_method = copy.deepcopy(sub_step['method'])
backup_response = copy.deepcopy(sub_step['response'])
del sub_step['method']
del sub_step['response']
if 'dependent_on_temp_event' in backup_response:
temp_event = get_dependent_results_from_database(
target,
module_name,
scan_unique_id,
backup_response['dependent_on_temp_event']
)
sub_step = replace_dependent_values(
sub_step,
temp_event
)
action = getattr(NettackPOP3Lib, backup_method, None)
for _ in range(options['retries']):
try:
response = action(**sub_step)
break
except Exception:
response = []
sub_step['method'] = backup_method
sub_step['response'] = backup_response
sub_step['response']['conditions_results'] = response
return process_conditions(
sub_step,
module_name,
target,
scan_unique_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
)

View File

@ -1,92 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import copy
import smtplib
# from core.utility import reverse_and_regex_condition
from core.utility import process_conditions
from core.utility import get_dependent_results_from_database
from core.utility import replace_dependent_values
# def response_conditions_matched(sub_step, response):
# return response
class NettackSMTPLib:
def smtp_brute_force(host, ports, usernames, passwords, timeout):
smtp_connection = smtplib.SMTP(host, int(ports), timeout=int(timeout))
smtp_connection.login(usernames, passwords)
smtp_connection.close()
return {
"host": host,
"username": usernames,
"password": passwords,
"port": ports
}
def smtps_brute_force(host, ports, usernames, passwords, timeout):
smtp_connection = smtplib.SMTP(host, int(ports), timeout=int(timeout))
smtp_connection.starttls()
smtp_connection.login(usernames, passwords)
smtp_connection.close()
return {
"host": host,
"username": usernames,
"password": passwords,
"port": ports
}
class Engine:
def run(
sub_step,
module_name,
target,
scan_unique_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
):
backup_method = copy.deepcopy(sub_step['method'])
backup_response = copy.deepcopy(sub_step['response'])
del sub_step['method']
del sub_step['response']
if 'dependent_on_temp_event' in backup_response:
temp_event = get_dependent_results_from_database(
target,
module_name,
scan_unique_id,
backup_response['dependent_on_temp_event']
)
sub_step = replace_dependent_values(
sub_step,
temp_event
)
action = getattr(NettackSMTPLib, backup_method, None)
for _ in range(options['retries']):
try:
response = action(**sub_step)
break
except Exception:
response = []
sub_step['method'] = backup_method
sub_step['response'] = backup_response
sub_step['response']['conditions_results'] = response
# sub_step['response']['conditions_results'] = response_conditions_matched(sub_step, response)
return process_conditions(
sub_step,
module_name,
target,
scan_unique_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
)

View File

@ -1,282 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket
import copy
import re
import os
import select
import struct
import time
import ssl
from core.utility import reverse_and_regex_condition
from core.utility import process_conditions
from core.utility import get_dependent_results_from_database
from core.utility import replace_dependent_values
def response_conditions_matched(sub_step, response):
conditions = sub_step['response']['conditions']
condition_type = sub_step['response']['condition_type']
condition_results = {}
if sub_step['method'] == 'tcp_connect_only':
return response
if sub_step['method'] == 'tcp_connect_send_and_receive':
if response:
received_content = response['response']
for condition in conditions:
regex = re.findall(re.compile(conditions[condition]['regex']), received_content)
reverse = conditions[condition]['reverse']
condition_results[condition] = reverse_and_regex_condition(regex, reverse)
for condition in copy.deepcopy(condition_results):
if not condition_results[condition]:
del condition_results[condition]
if 'open_port' in condition_results and len(condition_results) > 1:
del condition_results['open_port']
del conditions['open_port']
if condition_type == 'and':
return condition_results if len(condition_results) == len(conditions) else []
if condition_type == 'or':
return condition_results if condition_results else []
return []
if sub_step['method'] == 'socket_icmp':
return response
return []
def create_tcp_socket(host, ports, timeout):
socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
socket_connection.settimeout(timeout)
socket_connection.connect((host, int(ports)))
ssl_flag = False
try:
# ssl.wrap_socket was deprecated and removed in Python 3.12; use a context instead
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
socket_connection = context.wrap_socket(socket_connection)
ssl_flag = True
except Exception:
socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
socket_connection.settimeout(timeout)
socket_connection.connect((host, int(ports)))
return socket_connection, ssl_flag
class NettackerSocket:
def tcp_connect_only(host, ports, timeout):
socket_connection, ssl_flag = create_tcp_socket(host, ports, timeout)
peer_name = socket_connection.getpeername()
socket_connection.close()
return {
"peer_name": peer_name,
"service": socket.getservbyport(int(ports)),
"ssl_flag": ssl_flag
}
def tcp_connect_send_and_receive(host, ports, timeout):
socket_connection, ssl_flag = create_tcp_socket(host, ports, timeout)
peer_name = socket_connection.getpeername()
try:
socket_connection.send(b"ABC\x00\r\n\r\n\r\n" * 10)
response = socket_connection.recv(1024 * 1024 * 10)
socket_connection.close()
except Exception:
try:
socket_connection.close()
response = b""
except Exception:
response = b""
return {
"peer_name": peer_name,
"service": socket.getservbyport(int(ports)),
"response": response.decode(errors='ignore'),
"ssl_flag": ssl_flag
}
def socket_icmp(host, timeout):
"""
A pure python ping implementation using raw socket.
Note that ICMP messages can only be sent from processes running as root.
Derived from ping.c distributed in Linux's netkit. That code is
copyright (c) 1989 by The Regents of the University of California.
That code is in turn derived from code written by Mike Muuss of the
US Army Ballistic Research Laboratory in December, 1983 and
placed in the public domain. They have my thanks.
Bugs are naturally mine. I'd be glad to hear about them. There are
certainly word-size dependencies here.
Copyright (c) Matthew Dixon Cowles, <http://www.visi.com/~mdc/>.
Distributable under the terms of the GNU General Public License
version 2. Provided with no warranties of any sort.
Original Version from Matthew Dixon Cowles:
-> ftp://ftp.visi.com/users/mdc/ping.py
Rewrite by Jens Diemer:
-> http://www.python-forum.de/post-69122.html#69122
Rewrite by George Notaras:
-> http://www.g-loaded.eu/2009/10/30/python-ping/
Fork by Pierre Bourdon:
-> http://bitbucket.org/delroth/python-ping/
Revision history
~~~~~~~~~~~~~~~~
November 22, 1997
-----------------
Initial hack. Doesn't do much, but rather than try to guess
what features I (or others) will want in the future, I've only
put in what I need now.
December 16, 1997
-----------------
For some reason, the checksum bytes are in the wrong order when
this is run under Solaris 2.X for SPARC but it works right under
Linux x86. Since I don't know just what's wrong, I'll swap the
bytes always and then do an htons().
December 4, 2000
----------------
Changed the struct.pack() calls to pack the checksum and ID as
unsigned. My thanks to Jerome Poincheval for the fix.
May 30, 2007
------------
little rewrite by Jens Diemer:
- change socket asterisk import to a normal import
- replace time.time() with time.clock()
- delete "return None" (or change to "return" only)
- in checksum() rename "str" to "source_string"
November 8, 2009
----------------
Improved compatibility with GNU/Linux systems.
Fixes by:
* George Notaras -- http://www.g-loaded.eu
Reported by:
* Chris Hallman -- http://cdhallman.blogspot.com
Changes in this release:
- Re-use time.time() instead of time.clock(). The 2007 implementation
worked only under Microsoft Windows. Failed on GNU/Linux.
time.clock() behaves differently under the two OSes[1].
[1] http://docs.python.org/library/time.html#time.clock
September 25, 2010
------------------
Little modifications by Georgi Kolev:
- Added quiet_ping function.
- returns percent of lost packets, max round trip time, average round trip time
- Added packet size to verbose_ping & quiet_ping functions.
- Bump up version to 0.2
------------------
5 Aug 2021 - Modified by Ali Razmjoo Qalaei (reformatted the code to make it more readable)
"""
icmp_socket = socket.getprotobyname("icmp")
socket_connection = socket.socket(
socket.AF_INET,
socket.SOCK_RAW,
icmp_socket
)
random_integer = os.getpid() & 0xFFFF
icmp_echo_request = 8
# Make a dummy header with a 0 checksum.
dummy_checksum = 0
header = struct.pack("bbHHh", icmp_echo_request, 0, dummy_checksum, random_integer, 1)
data = struct.pack("d", time.time()) + struct.pack("d", time.time()) + str(
(76 - struct.calcsize("d")) * "Q"
).encode() # packet size = 76 (removed 8 bytes size of header)
source_string = header + data
# Calculate the checksum on the data and the dummy header.
calculate_data = 0
max_size = (len(source_string) // 2) * 2
counter = 0
while counter < max_size:
calculate_data += source_string[counter + 1] * 256 + source_string[counter]
calculate_data = calculate_data & 0xffffffff # Necessary?
counter += 2
if max_size < len(source_string):
calculate_data += source_string[len(source_string) - 1]
calculate_data = calculate_data & 0xffffffff # Necessary?
calculate_data = (calculate_data >> 16) + (calculate_data & 0xffff)
calculate_data = calculate_data + (calculate_data >> 16)
calculated_data = ~calculate_data & 0xffff
# Swap bytes. Bugger me if I know why.
dummy_checksum = calculated_data >> 8 | (calculated_data << 8 & 0xff00)
header = struct.pack(
"bbHHh", icmp_echo_request, 0, socket.htons(dummy_checksum), random_integer, 1
)
socket_connection.sendto(header + data, (socket.gethostbyname(host), 1)) # Don't know about the 1
delay = None  # remains None if no echo reply arrives before the timeout
while True:
started_select = time.time()
what_ready = select.select([socket_connection], [], [], timeout)
how_long_in_select = (time.time() - started_select)
if not what_ready[0]: # Timeout
break
time_received = time.time()
received_packet, address = socket_connection.recvfrom(1024)
icmp_header = received_packet[20:28]
packet_type, packet_code, packet_checksum, packet_id, packet_sequence = struct.unpack(
"bbHHh", icmp_header
)
if packet_id == random_integer:
packet_bytes = struct.calcsize("d")
time_sent = struct.unpack("d", received_packet[28:28 + packet_bytes])[0]
delay = time_received - time_sent
break
timeout = timeout - how_long_in_select
if timeout <= 0:
break
socket_connection.close()
return {
"host": host,
"response_time": delay,
"ssl_flag": False
}
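
socket_icmp opens a raw socket, so it only works with root privileges; a hedged usage example (host illustrative):

# Illustrative only -- must run as root for SOCK_RAW.
print(NettackerSocket.socket_icmp("127.0.0.1", timeout=2))
# e.g. {'host': '127.0.0.1', 'response_time': 0.0003, 'ssl_flag': False}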
class Engine:
def run(
sub_step,
module_name,
target,
scan_unique_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
):
backup_method = copy.deepcopy(sub_step['method'])
backup_response = copy.deepcopy(sub_step['response'])
del sub_step['method']
del sub_step['response']
if 'dependent_on_temp_event' in backup_response:
temp_event = get_dependent_results_from_database(
target,
module_name,
scan_unique_id,
backup_response['dependent_on_temp_event']
)
sub_step = replace_dependent_values(
sub_step,
temp_event
)
action = getattr(NettackerSocket, backup_method, None)
for _ in range(options['retries']):
try:
response = action(**sub_step)
break
except Exception:
response = []
sub_step['method'] = backup_method
sub_step['response'] = backup_response
sub_step['response']['ssl_flag'] = response['ssl_flag'] if isinstance(response, dict) else False
sub_step['response']['conditions_results'] = response_conditions_matched(sub_step, response)
return process_conditions(
sub_step,
module_name,
target,
scan_unique_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
)

View File

@ -1,94 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import copy
import paramiko
import logging
# from core.utility import reverse_and_regex_condition
from core.utility import process_conditions
from core.utility import get_dependent_results_from_database
from core.utility import replace_dependent_values
# def response_conditions_matched(sub_step, response):
# return response
class NettackSSHLib:
def ssh_brute_force(host, ports, usernames, passwords, timeout):
paramiko_logger = logging.getLogger("paramiko.transport")
paramiko_logger.disabled = True
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(
hostname=host,
port=int(ports),
timeout=int(timeout),
auth_strategy=paramiko.auth_strategy.Password(
username=usernames,
password_getter=lambda: passwords
) if passwords else paramiko.auth_strategy.NoneAuth(
username=usernames
),
)
ssh.close()
return {
"host": host,
"username": usernames,
"password": passwords,
"port": ports
}
class Engine:
def run(
sub_step,
module_name,
target,
scan_unique_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
):
backup_method = copy.deepcopy(sub_step['method'])
backup_response = copy.deepcopy(sub_step['response'])
del sub_step['method']
del sub_step['response']
if 'dependent_on_temp_event' in backup_response:
temp_event = get_dependent_results_from_database(
target,
module_name,
scan_unique_id,
backup_response['dependent_on_temp_event']
)
sub_step = replace_dependent_values(
sub_step,
temp_event
)
action = getattr(NettackSSHLib, backup_method, None)
for _ in range(options['retries']):
try:
response = action(**sub_step)
break
except Exception:
response = []
sub_step['method'] = backup_method
sub_step['response'] = backup_response
sub_step['response']['conditions_results'] = response
# sub_step['response']['conditions_results'] = response_conditions_matched(sub_step, response)
return process_conditions(
sub_step,
module_name,
target,
scan_unique_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
)

View File

@ -1,83 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import copy
import telnetlib
# from core.utility import reverse_and_regex_condition
from core.utility import process_conditions
from core.utility import get_dependent_results_from_database
from core.utility import replace_dependent_values
# def response_conditions_matched(sub_step, response):
# return response
class NettackTelnetLib:
def telnet_brute_force(host, ports, usernames, passwords, timeout):
telnet_connection = telnetlib.Telnet(host, port=int(ports), timeout=int(timeout))
telnet_connection.read_until(b"login: ")
telnet_connection.write(usernames.encode() + b"\n")
telnet_connection.read_until(b"Password: ")
telnet_connection.write(passwords.encode() + b"\n")
telnet_connection.close()
return {
"host": host,
"username": usernames,
"password": passwords,
"port": ports
}
class Engine:
def run(
sub_step,
module_name,
target,
scan_unique_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
):
backup_method = copy.deepcopy(sub_step['method'])
backup_response = copy.deepcopy(sub_step['response'])
del sub_step['method']
del sub_step['response']
if 'dependent_on_temp_event' in backup_response:
temp_event = get_dependent_results_from_database(
target,
module_name,
scan_unique_id,
backup_response['dependent_on_temp_event']
)
sub_step = replace_dependent_values(
sub_step,
temp_event
)
action = getattr(NettackTelnetLib, backup_method, None)
for _ in range(options['retries']):
try:
response = action(**sub_step)
break
except Exception:
response = []
sub_step['method'] = backup_method
sub_step['response'] = backup_response
sub_step['response']['conditions_results'] = response
# sub_step['response']['conditions_results'] = response_conditions_matched(sub_step, response)
return process_conditions(
sub_step,
module_name,
target,
scan_unique_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
)

View File

@ -1,28 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from core.scan_targets import start_scan_processes
from core.alert import info
from core.alert import write
from core.alert import messages
from core.load_modules import load_all_modules
from core.args_loader import load_all_args
from core.args_loader import check_all_required
def load():
"""
load all args, apply rules and start the attacks
Returns:
the exit code of the scan
"""
write("\n\n")
options = check_all_required(load_all_args())
info(messages("scan_started"))
info(messages("loaded_modules").format(len(load_all_modules())))
exit_code = start_scan_processes(options)
info(messages("done"))
return exit_code

View File

@ -1,121 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import numpy
import multiprocessing
from core.alert import (info,
verbose_event_info,
messages)
from core.targets import expand_targets
from core.utility import generate_random_token
from core.load_modules import perform_scan
from terminable_thread import Thread
from core.utility import wait_for_threads_to_finish
from core.graph import create_report
def parallel_scan_process(options, targets, scan_unique_id, process_number):
active_threads = []
verbose_event_info(messages("single_process_started").format(process_number))
total_number_of_modules = len(targets) * len(options.selected_modules)
total_number_of_modules_counter = 1
for target in targets:
for module_name in options.selected_modules:
thread = Thread(
target=perform_scan,
args=(
options,
target,
module_name,
scan_unique_id,
process_number,
total_number_of_modules_counter,
total_number_of_modules
)
)
thread.name = f"{target} -> {module_name}"
thread.start()
verbose_event_info(
messages("start_parallel_module_scan").format(
process_number,
module_name,
target,
total_number_of_modules_counter,
total_number_of_modules
)
)
total_number_of_modules_counter += 1
active_threads.append(thread)
if not wait_for_threads_to_finish(active_threads, options.parallel_module_scan, True):
return False
wait_for_threads_to_finish(active_threads, maximum=None, terminable=True)
return True
def multi_processor(options, scan_unique_id):
if not options.targets:
info(messages("no_live_service_found"))
return True
number_of_total_targets = len(options.targets)
options.targets = [
targets.tolist() for targets in numpy.array_split(
options.targets,
options.set_hardware_usage if options.set_hardware_usage <= len(options.targets)
else number_of_total_targets
)
]
info(messages("removing_old_db_records"))
from database.db import remove_old_logs
for target_group in options.targets:
for target in target_group:
for module_name in options.selected_modules:
remove_old_logs(
{
"target": target,
"module_name": module_name,
"scan_unique_id": scan_unique_id,
}
)
for _ in range(options.targets.count([])):
options.targets.remove([])
active_processes = []
info(
messages("start_multi_process").format(
number_of_total_targets,
len(options.targets)
)
)
process_number = 0
for targets in options.targets:
process_number += 1
process = multiprocessing.Process(
target=parallel_scan_process,
args=(options, targets, scan_unique_id, process_number,)
)
process.start()
active_processes.append(process)
return wait_for_threads_to_finish(active_processes, sub_process=True)
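
The numpy.array_split call above is what spreads targets across the worker processes; a tiny worked example:

import numpy

# Five targets split across two processes: group sizes differ by at most one.
groups = [g.tolist() for g in numpy.array_split(["a", "b", "c", "d", "e"], 2)]
print(groups)  # [['a', 'b', 'c'], ['d', 'e']]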
def start_scan_processes(options):
"""
prepare for the attacks and manage multi-processing across host groups
Args:
options: all options
Returns:
the scan exit code
"""
scan_unique_id = generate_random_token(32)
# find total number of targets + types + expand (subdomain, IPRanges, etc)
# optimize CPU usage
info(messages("regrouping_targets"))
options.targets = expand_targets(options, scan_unique_id)
if options.targets:
exit_code = multi_processor(options, scan_unique_id)
create_report(options, scan_unique_id)
else:
info(messages("no_live_service_found"))
exit_code = True
return exit_code

View File

@ -1,106 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import copy
import json
import os
from core.ip import (get_ip_range,
generate_ip_range,
is_single_ipv4,
is_ipv4_range,
is_ipv4_cidr,
is_single_ipv6,
is_ipv6_range,
is_ipv6_cidr)
from database.db import find_events
def filter_target_by_event(targets, scan_unique_id, module_name):
for target in copy.deepcopy(targets):
if not find_events(target, module_name, scan_unique_id):
targets.remove(target)
return targets
def expand_targets(options, scan_unique_id):
"""
analyze and expand targets (URLs, IP ranges, CIDRs, domains).
Args:
options: all options
scan_unique_id: unique scan identifier
Returns:
a list of unique targets
"""
from core.scan_targets import multi_processor
targets = []
for target in options.targets:
if '://' in target:
# remove url proto; uri; port
target = target.split('://')[1].split('/')[0].split(':')[0]
targets.append(target)
# single IPs
elif is_single_ipv4(target) or is_single_ipv6(target):
if options.scan_ip_range:
targets += get_ip_range(target)
else:
targets.append(target)
# IP ranges
elif is_ipv4_range(target) or is_ipv6_range(target) or is_ipv4_cidr(target) or is_ipv6_cidr(target):
targets += generate_ip_range(target)
# domains probably
else:
targets.append(target)
options.targets = targets
# subdomain_scan
if options.scan_subdomains:
selected_modules = options.selected_modules
options.selected_modules = ['subdomain_scan']
multi_processor(
copy.deepcopy(options),
scan_unique_id
)
options.selected_modules = selected_modules
if 'subdomain_scan' in options.selected_modules:
options.selected_modules.remove('subdomain_scan')
for target in copy.deepcopy(options.targets):
for row in find_events(target, 'subdomain_scan', scan_unique_id):
for sub_domain in json.loads(row.json_event)['response']['conditions_results']['content']:
if sub_domain not in options.targets:
options.targets.append(sub_domain)
# icmp_scan
if options.ping_before_scan:
if os.geteuid() == 0:
selected_modules = options.selected_modules
options.selected_modules = ['icmp_scan']
multi_processor(
copy.deepcopy(options),
scan_unique_id
)
options.selected_modules = selected_modules
if 'icmp_scan' in options.selected_modules:
options.selected_modules.remove('icmp_scan')
options.targets = filter_target_by_event(targets, scan_unique_id, 'icmp_scan')
else:
from core.alert import warn
from core.alert import messages
warn(messages("icmp_need_root_access"))
if 'icmp_scan' in options.selected_modules:
options.selected_modules.remove('icmp_scan')
# port_scan
if not options.skip_service_discovery:
options.skip_service_discovery = True
selected_modules = options.selected_modules
options.selected_modules = ['port_scan']
multi_processor(
copy.deepcopy(options),
scan_unique_id
)
options.selected_modules = selected_modules
if 'port_scan' in options.selected_modules:
options.selected_modules.remove('port_scan')
options.targets = filter_target_by_event(targets, scan_unique_id, 'port_scan')
options.skip_service_discovery = False
return list(set(options.targets))
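
The scheme-stripping one-liner above reduces a URL target to its bare host; for example:

target = "https://example.com:8080/admin"
print(target.split("://")[1].split("/")[0].split(":")[0])  # example.com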

View File

@ -1,16 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import datetime
def now(model="%Y-%m-%d %H:%M:%S"):
"""
get the current date and time
Args:
model: the date and time format, default is "%Y-%m-%d %H:%M:%S"
Returns:
the current date and time, formatted when a model is given
"""
return datetime.datetime.now().strftime(model) if model else datetime.datetime.now()

View File

@ -1,575 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import copy
import random
import string
import sys
import ctypes
import time
import json
import os
import multiprocessing
import yaml
import hashlib
import re
from core.load_modules import load_all_languages
from core.time import now
from core.color import color
def process_conditions(
event,
module_name,
target,
scan_unique_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
):
from core.alert import (success_event_info,
verbose_info,
messages)
if 'save_to_temp_events_only' in event.get('response', ''):
from database.db import submit_temp_logs_to_db
submit_temp_logs_to_db(
{
"date": now(model=None),
"target": target,
"module_name": module_name,
"scan_unique_id": scan_unique_id,
"event_name": event['response']['save_to_temp_events_only'],
"port": event.get('ports', ''),
"event": event,
"data": response
}
)
if event['response']['conditions_results'] and 'save_to_temp_events_only' not in event.get('response', ''):
from database.db import submit_logs_to_db
# remove sensitive information before submitting to db
from config import nettacker_api_config
options = copy.deepcopy(options)
for key in nettacker_api_config():
try:
del options[key]
except Exception:
continue
del event['response']['conditions']
del event['response']['condition_type']
if 'log' in event['response']:
del event['response']['log']
event_request_keys = copy.deepcopy(event)
del event_request_keys['response']
submit_logs_to_db(
{
"date": now(model=None),
"target": target,
"module_name": module_name,
"scan_unique_id": scan_unique_id,
"port": event.get('ports') or event.get('port') or (
event.get('url').split(':')[2].split('/')[0] if
type(event.get('url')) == str and len(event.get('url').split(':')) >= 3 and
event.get('url').split(':')[2].split('/')[0].isdigit() else ""
),
"event": " ".join(
yaml.dump(event_request_keys).split()
) + " conditions: " + " ".join(
yaml.dump(event['response']['conditions_results']).split()
),
"json_event": event
}
)
log_list = merge_logs_to_list(event['response']['conditions_results'])
if log_list:
success_event_info(
messages("send_success_event_from_module").format(
process_number,
module_name,
target,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
" ",
filter_large_content(
"\n".join(
[
color('purple') + key + color('reset')
for key in log_list
]
),
filter_rate=100000
)
)
)
else:
success_event_info(
messages("send_success_event_from_module").format(
process_number,
module_name,
target,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
" ".join(
[
color('yellow') + key + color('reset') if ':' in key
else color('green') + key + color('reset')
for key in yaml.dump(event_request_keys).split()
]
),
filter_large_content(
"conditions: " + " ".join(
[
color('purple') + key + color('reset') if ':' in key
else color('green') + key + color('reset')
for key in yaml.dump(event['response']['conditions_results']).split()
]
),
filter_rate=150
)
)
)
verbose_info(
json.dumps(event)
)
return True
else:
del event['response']['conditions']
verbose_info(
messages("send_unsuccess_event_from_module").format(
process_number,
module_name,
target,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests
)
)
verbose_info(
json.dumps(event)
)
return 'save_to_temp_events_only' in event['response']
def filter_large_content(content, filter_rate=150):
from core.alert import messages
if len(content) <= filter_rate:
return content
else:
filter_rate -= 1
filter_index = filter_rate
for char in content[filter_rate:]:
if char == ' ':
return content[0:filter_index] + messages('filtered_content')
else:
filter_index += 1
return content
def get_dependent_results_from_database(target, module_name, scan_unique_id, event_names):
from database.db import find_temp_events
events = []
for event_name in event_names.split(','):
while True:
event = find_temp_events(target, module_name, scan_unique_id, event_name)
if event:
events.append(json.loads(event.event)['response']['conditions_results'])
break
time.sleep(0.1)
return events
def find_and_replace_dependent_values(sub_step, dependent_on_temp_event):
if type(sub_step) == dict:
for key in copy.deepcopy(sub_step):
if type(sub_step[key]) not in [str, float, int, bytes]:
sub_step[key] = find_and_replace_dependent_values(
copy.deepcopy(sub_step[key]), dependent_on_temp_event
)
else:
if type(sub_step[key]) == str:
if 'dependent_on_temp_event' in sub_step[key]:
globals().update(locals())
generate_new_step = copy.deepcopy(sub_step[key])
key_name = re.findall(
re.compile("dependent_on_temp_event\\[\\S+\\]\\['\\S+\\]\\[\\S+\\]"),
generate_new_step
)[0]
try:
key_value = eval(key_name)
except Exception:
key_value = "error"
sub_step[key] = sub_step[key].replace(key_name, key_value)
if type(sub_step) == list:
value_index = 0
for value in copy.deepcopy(sub_step):
if type(sub_step[value_index]) not in [str, float, int, bytes]:
sub_step[value_index] = find_and_replace_dependent_values(
copy.deepcopy(sub_step[value_index]), dependent_on_temp_event
)
else:
if type(sub_step[value_index]) == str:
if 'dependent_on_temp_event' in sub_step[value_index]:
globals().update(locals())
generate_new_step = copy.deepcopy(sub_step[value_index])
key_name = re.findall(
re.compile("dependent_on_temp_event\\['\\S+\\]\\[\\S+\\]"),
generate_new_step
)[0]
try:
key_value = eval(key_name)
except Exception:
key_value = "error"
sub_step[value_index] = sub_step[value_index].replace(key_name, key_value)
value_index += 1
return sub_step
def replace_dependent_values(sub_step, dependent_on_temp_event):
return find_and_replace_dependent_values(sub_step, dependent_on_temp_event)
def replace_dependent_response(log, result):
response_dependent = result
if str(log):
key_name = re.findall(
re.compile("response_dependent\\['\\S+\\]"),
log
)
for i in key_name:
try:
key_value = eval(i)
except Exception:
key_value = "response dependent error"
log = log.replace(i, " ".join(key_value))
return log
def merge_logs_to_list(result, log_list=None):
    # fresh list per call: a mutable default argument would leak results between calls
    if log_list is None:
        log_list = []
if type(result) == dict:
for i in result:
if 'log' == i:
log_list.append(result['log'])
else:
merge_logs_to_list(result[i], log_list)
return list(set(log_list))
def reverse_and_regex_condition(regex, reverse):
if regex:
if reverse:
return []
return list(set(regex))
else:
if reverse:
return True
return []
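
The reverse flag inverts a regex match into its boolean complement; the four cases of the helper above:

print(reverse_and_regex_condition(["200"], False))  # ['200']   match, kept
print(reverse_and_regex_condition(["200"], True))   # []        match, reversed away
print(reverse_and_regex_condition([], False))       # []        no match
print(reverse_and_regex_condition([], True))        # True      no match, reversed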
def select_maximum_cpu_core(mode):
    # guard with max(..., 1) so small machines never get 0 workers
    # (the old guard tested cpu_count() - 1, letting 'normal'/'low' return 0)
    if mode == 'maximum':
        return max(multiprocessing.cpu_count() - 1, 1)
    if mode == 'high':
        return max(multiprocessing.cpu_count() // 2, 1)
    if mode == 'normal':
        return max(multiprocessing.cpu_count() // 4, 1)
    if mode == 'low':
        return max(multiprocessing.cpu_count() // 8, 1)
    return 1
def wait_for_threads_to_finish(threads, maximum=None, terminable=False, sub_process=False):
while threads:
try:
dead_threads = []
for thread in threads:
if not thread.is_alive():
dead_threads.append(thread)
if dead_threads:
for thread in dead_threads:
threads.remove(thread)
dead_threads = []
if maximum and len(threads) < maximum:
break
time.sleep(0.01)
except KeyboardInterrupt:
if terminable:
for thread in threads:
terminate_thread(thread)
if sub_process:
for thread in threads:
thread.kill()
return False
return True
def terminate_thread(thread, verbose=True):
"""
kill a thread https://stackoverflow.com/a/15274929
Args:
thread: an alive thread
verbose: verbose mode/boolean
Returns:
True/None
"""
from core.alert import info
if verbose:
info("killing {0}".format(thread.name))
if not thread.is_alive():
return
exc = ctypes.py_object(SystemExit)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
ctypes.c_long(thread.ident),
exc
)
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("PyThreadState_SetAsyncExc failed")
return True
def find_args_value(args_name):
try:
return sys.argv[sys.argv.index(args_name) + 1]
except Exception:
return None
def application_language():
from config import nettacker_global_config
nettacker_global_configuration = nettacker_global_config()
if "-L" in sys.argv:
language = find_args_value('-L') or 'en'
elif "--language" in sys.argv:
language = find_args_value('--language') or 'en'
else:
language = nettacker_global_configuration['nettacker_user_application_config']['language']
if language not in load_all_languages():
language = 'en'
return language
def generate_random_token(length=10):
return "".join(
random.choice(string.ascii_lowercase) for _ in range(length)
)
def re_address_repeaters_key_name(key_name):
return "".join(['[\'' + _key + '\']' for _key in key_name.split('/')[:-1]])
def generate_new_sub_steps(sub_steps, data_matrix, arrays):
original_sub_steps = copy.deepcopy(sub_steps)
steps_array = []
for array in data_matrix:
array_name_position = 0
for array_name in arrays:
for sub_step in sub_steps:
exec(
"original_sub_steps{key_name} = {matrix_value}".format(
key_name=re_address_repeaters_key_name(array_name),
matrix_value='"' + str(array[array_name_position]) + '"' if type(
array[array_name_position]) == int or type(array[array_name_position]) == str else array[
array_name_position]
)
)
array_name_position += 1
steps_array.append(copy.deepcopy(original_sub_steps))
return steps_array
def find_repeaters(sub_content, root, arrays):
if type(sub_content) == dict and 'nettacker_fuzzer' not in sub_content:
temporary_content = copy.deepcopy(sub_content)
original_root = root
for key in sub_content:
    root = original_root
    root += key + '/'
    temporary_content[key], _root, arrays = find_repeaters(sub_content[key], root, arrays)
sub_content = copy.deepcopy(temporary_content)
root = original_root
if (type(sub_content) not in [bool, int, float]) and (
type(sub_content) == list or 'nettacker_fuzzer' in sub_content):
arrays[root] = sub_content
return (sub_content, root, arrays) if root != '' else arrays
def find_and_replace_configuration_keys(module_content, module_inputs):
if type(module_content) == dict:
for key in copy.deepcopy(module_content):
if key in module_inputs:
if module_inputs[key]:
module_content[key] = module_inputs[key]
elif type(module_content[key]) in [dict, list]:
module_content[key] = find_and_replace_configuration_keys(module_content[key], module_inputs)
elif type(module_content) == list:
array_index = 0
for key in copy.deepcopy(module_content):
module_content[array_index] = find_and_replace_configuration_keys(key, module_inputs)
array_index += 1
else:
return module_content
return module_content
class value_to_class:
def __init__(self, value):
self.value = value
def class_to_value(arrays):
original_arrays = copy.deepcopy(arrays)
array_index = 0
for array in arrays:
value_index = 0
for value in array:
if type(value) == value_to_class:
original_arrays[array_index][value_index] = value.value
value_index += 1
array_index += 1
return original_arrays
def generate_and_replace_md5(content):
# todo: make it better and document it
md5_content = content.split('NETTACKER_MD5_GENERATOR_START')[1].split('NETTACKER_MD5_GENERATOR_STOP')[0]
md5_content_backup = md5_content
if type(md5_content) == str:
md5_content = md5_content.encode()
md5_hash = hashlib.md5(md5_content).hexdigest()
return content.replace(
'NETTACKER_MD5_GENERATOR_START' + md5_content_backup + 'NETTACKER_MD5_GENERATOR_STOP',
md5_hash
)
def arrays_to_matrix(arrays):
import numpy
return numpy.array(
numpy.meshgrid(*[
arrays[array_name] for array_name in arrays
])
).T.reshape(
-1,
len(arrays.keys())
).tolist()
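
arrays_to_matrix takes the cartesian product of the fuzzer arrays via numpy.meshgrid; a tiny worked example (note that numpy coerces mixed element types to strings, which appears to be what value_to_class above guards against):

print(arrays_to_matrix({"user": ["u1", "u2"], "pass": ["p1"]}))
# [['u1', 'p1'], ['u2', 'p1']]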
def string_to_bytes(string):
return string.encode()
def fuzzer_function_read_file_as_array(filename):
from config import nettacker_paths
return open(
os.path.join(
nettacker_paths()['payloads_path'],
filename
)
).read().split('\n')
def apply_data_functions(data):
original_data = copy.deepcopy(data)
function_results = {}
globals().update(locals())
for data_name in data:
if type(data[data_name]) == str and data[data_name].startswith('fuzzer_function'):
exec(
"fuzzer_function = {fuzzer_function}".format(
fuzzer_function=data[data_name]
),
globals(),
function_results
)
original_data[data_name] = function_results['fuzzer_function']
return original_data
def nettacker_fuzzer_repeater_perform(arrays):
original_arrays = copy.deepcopy(arrays)
for array_name in arrays:
if 'nettacker_fuzzer' in arrays[array_name]:
data = arrays[array_name]['nettacker_fuzzer']['data']
data_matrix = arrays_to_matrix(apply_data_functions(data))
prefix = arrays[array_name]['nettacker_fuzzer']['prefix']
input_format = arrays[array_name]['nettacker_fuzzer']['input_format']
interceptors = copy.deepcopy(arrays[array_name]['nettacker_fuzzer']['interceptors'])
if interceptors:
interceptors = interceptors.split(',')
suffix = arrays[array_name]['nettacker_fuzzer']['suffix']
processed_array = []
for sub_data in data_matrix:
formatted_data = {}
index_input = 0
for value in sub_data:
formatted_data[list(data.keys())[index_input]] = value
index_input += 1
interceptors_function = ''
interceptors_function_processed = ''
if interceptors:
interceptors_function += 'interceptors_function_processed = '
for interceptor in interceptors[::-1]:
interceptors_function += '{interceptor}('.format(interceptor=interceptor)
interceptors_function += 'input_format.format(**formatted_data)' + str(
')' * interceptors_function.count('('))
expected_variables = {}
globals().update(locals())
exec(interceptors_function, globals(), expected_variables)
interceptors_function_processed = expected_variables['interceptors_function_processed']
else:
interceptors_function_processed = input_format.format(**formatted_data)
processed_sub_data = interceptors_function_processed
if prefix:
processed_sub_data = prefix + processed_sub_data
if suffix:
processed_sub_data = processed_sub_data + suffix
processed_array.append(copy.deepcopy(processed_sub_data))
original_arrays[array_name] = processed_array
return original_arrays
def expand_module_steps(content):
return [expand_protocol(x) for x in copy.deepcopy(content)]
def expand_protocol(protocol):
protocol['steps'] = [expand_step(x) for x in protocol['steps']]
return protocol
def expand_step(step):
arrays = nettacker_fuzzer_repeater_perform(find_repeaters(step, '', {}))
if arrays:
return generate_new_sub_steps(step, class_to_value(arrays_to_matrix(arrays)), arrays)
else:
# Minimum 1 step in array
return [step]
def sort_dictionary(dictionary):
etc_flag = '...' in dictionary
if etc_flag:
del dictionary['...']
sorted_dictionary = {}
for key in sorted(dictionary):
sorted_dictionary[key] = dictionary[key]
if etc_flag:
sorted_dictionary['...'] = {}
return sorted_dictionary

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,57 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from sqlalchemy import create_engine
from config import nettacker_database_config
from database.models import Base
USER = nettacker_database_config()["USERNAME"]
PASSWORD = nettacker_database_config()["PASSWORD"]
HOST = nettacker_database_config()["HOST"]
PORT = nettacker_database_config()["PORT"]
DATABASE = nettacker_database_config()["DATABASE"]
def mysql_create_database():
"""
when using mysql database, this is the function that is used to create the database for the first time when you run
the nettacker module.
Args:
None
Returns:
True if success otherwise False
"""
try:
engine = create_engine('mysql://{0}:{1}@{2}:{3}'.format(USER, PASSWORD, HOST, PORT))
existing_databases = engine.execute("SHOW DATABASES;")
existing_databases = [
d[0] for d in existing_databases
]
if DATABASE not in existing_databases:
engine.execute("CREATE DATABASE {0} ".format(DATABASE))
return True
except Exception:
return False
def mysql_create_tables():
"""
when using mysql database, this is the function that is used to create the tables in the database for the first
time when you run the nettacker module.
Args:
None
Returns:
True if success otherwise False
"""
try:
db_engine = create_engine('mysql://{0}:{1}@{2}:{3}/{4}'.format(USER, PASSWORD, HOST, PORT, DATABASE))
Base.metadata.create_all(db_engine)
return True
except Exception:
return False

View File

@ -1,53 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from sqlalchemy import create_engine
from config import nettacker_database_config
from database.models import Base
from sqlalchemy.exc import OperationalError
USER = nettacker_database_config()["USERNAME"]
PASSWORD = nettacker_database_config()["PASSWORD"]
HOST = nettacker_database_config()["HOST"]
PORT = nettacker_database_config()["PORT"]
DATABASE = nettacker_database_config()["DATABASE"]
def postgres_create_database():
"""
when using postgres database, this is the function that is used to create the database for the first time when you
run the nettacker module.
Args:
None
Returns:
True if success otherwise False
"""
try:
engine = create_engine(
'postgres+psycopg2://{0}:{1}@{2}:{3}/{4}'.format(USER, PASSWORD, HOST, PORT, DATABASE)
)
Base.metadata.create_all(engine)
return True
except OperationalError:
# if the database does not exist
engine = create_engine(
"postgres+psycopg2://postgres:postgres@localhost/postgres")
conn = engine.connect()
conn.execute("commit")
conn.execute('CREATE DATABASE {0}'.format(DATABASE))
conn.close()
engine = create_engine(
'postgres+psycopg2://{0}:{1}@{2}:{3}/{4}'.format(
USER,
PASSWORD,
HOST,
PORT,
DATABASE
)
)
Base.metadata.create_all(engine)
return True
except Exception:
return False

View File

@ -1,33 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from sqlalchemy import create_engine
from database.models import Base
from config import nettacker_database_config
DATABASE = nettacker_database_config()["DATABASE"]
def sqlite_create_tables():
"""
when using sqlite database, this is the function that is used to create the database schema for the first time when
you run the nettacker module.
Args:
None
Returns:
True if success otherwise False
"""
try:
db_engine = create_engine(
'sqlite:///{0}'.format(DATABASE),
connect_args={
'check_same_thread': False
}
)
Base.metadata.create_all(db_engine)
return True
except Exception:
return False

View File

@ -5,7 +5,7 @@ services:
build:
context: .
dockerfile: "Dockerfile"
command: python3 nettacker.py --start-api --api-host 0.0.0.0
command: poetry run python src/nettacker/run.py --start-api --api-host 0.0.0.0
ports:
- 5000:5000
volumes:

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,94 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import os
from core.alert import messages
def start(events):
"""
generate the d3_tree_v1_graph with events
Args:
events: all events
Returns:
a graph in HTML
"""
# define a normalised_json
normalisedjson = {
"name": "Started attack",
"children": {}
}
# get data for normalised_json
for event in events:
if event['target'] not in normalisedjson['children']:
normalisedjson['children'].update(
{
event['target']: {}
}
)
normalisedjson['children'][event['target']].update(
{
event['module_name']: []
}
)
if event['module_name'] not in normalisedjson['children'][event['target']]:
normalisedjson['children'][event['target']].update(
{
event['module_name']: []
}
)
normalisedjson['children'][event['target']][event['module_name']].append(
f"target: {event['target']}, module_name: {event['module_name']}, port: "
f"{event['port']}, event: {event['event']}"
)
# define a d3_structure_json
d3_structure = {
"name": "Starting attack",
"children": []
}
# build the d3_structure from normalisedjson
for target in list(normalisedjson['children'].keys()):
for module_name in list(normalisedjson['children'][target].keys()):
for description in normalisedjson["children"][target][module_name]:
children_array = [
{
"name": module_name,
"children": [
{
"name": description
}
]
}
]
d3_structure["children"].append(
{
"name": target,
"children": children_array
}
)
from config import nettacker_paths
data = open(
os.path.join(
nettacker_paths()['web_static_files_path'],
'report/d3_tree_v1.html'
)
).read().replace(
'__data_will_locate_here__',
json.dumps(d3_structure)
).replace(
'__title_to_replace__',
messages("pentest_graphs")
).replace(
'__description_to_replace__',
messages("graph_message")
).replace(
'__html_title_to_replace__',
messages("nettacker_report")
)
return data

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,40 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
from config import nettacker_paths
css_1 = open(
os.path.join(
nettacker_paths()['web_static_files_path'],
'report/html_table.css'
)
).read()
json_parse_js = open(
os.path.join(
nettacker_paths()['web_static_files_path'],
'report/json_parse.js'
)
).read()
table_title = open(
os.path.join(
nettacker_paths()['web_static_files_path'],
'report/table_title.html'
)
).read()
table_items = open(
os.path.join(
nettacker_paths()['web_static_files_path'],
'report/table_items.html'
)
).read()
table_end = open(
os.path.join(
nettacker_paths()['web_static_files_path'],
'report/table_end.html'
)
).read()

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,3 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
pass

View File

@ -1,15 +0,0 @@
______ __ _____ _____
/ __ \ \ / /\ / ____| __ \
| | | \ \ /\ / / \ | (___ | |__) |
| | | |\ \/ \/ / /\ \ \___ \| ___/
| |__| | \ /\ / ____ \ ____) | | {2}Version {0}{3}
\____/ \/ \/_/ \_\_____/|_| {4}{1}{5}
_ _ _ _ _
| \ | | | | | | | |
{6}github.com/OWASP {7} | \| | ___| |_| |_ __ _ ___| | _____ _ __
{8}owasp.org{9} | . ` |/ _ \ __| __/ _` |/ __| |/ / _ \ '__|
{10}z3r0d4y.com{11} | |\ | __/ |_| || (_| | (__| < __/ |
|_| \_|\___|\__|\__\__,_|\___|_|\_\___|_|

View File

@ -1,18 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from core.compatible import check_dependencies
"""
entry point of OWASP Nettacker framework
"""
# check_dependencies is called to verify requirements before loading the engine
if __name__ == "__main__":
check_dependencies() # check for dependencies
# if dependencies and OS requirements are match then load the program
from core.parse import load
load() # load and parse the ARGV
# sys.exit(main())

1898
poetry.lock generated Normal file

File diff suppressed because it is too large Load Diff

95
pyproject.toml Normal file
View File

@ -0,0 +1,95 @@
[tool.poetry]
name = "nettacker"
version = "0.3.1-alpha.17"
description = "Automates information gathering, vulnerability scanning and aids penetration testing engagements in general"
license = "Apache-2.0"
readme = "README.md"
authors = ["OWASP Nettacker Contributors"]
classifiers = [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
]
keywords = [
"automation",
"bruteforce",
"cve",
"hacking-tools",
"information-gathering",
"network-analysis",
"owasp",
"penetration-testing",
"pentesting",
"pentesting-tools",
"port-scanner",
"python",
"security-tools",
"security",
"vulnerability-management",
"vulnerability-scanner",
]
homepage = "https://owasp.org/www-project-nettacker"
repository = "https://github.com/OWASP/Nettacker"
documentation = "https://github.com/OWASP/Nettacker/wiki"
packages = [{ include = "nettacker", from = "src" }]
[tool.poetry.dependencies]
python = "^3.9, <3.13"
aiohttp = "^3.9.5"
argparse = "^1.4.0"
asyncio = "^3.4.3"
flask = "^3.0.1"
ipaddr = "^2.2.0"
multiprocess = "^0.70.15"
netaddr = "^0.9.0"
numpy = "^1.26.1"
paramiko = "^3.4.0"
py3dns = "^4.0.0"
pyopenssl = "^23.2.0"
pysocks = "^1.7.1"
pyyaml = "^6.0.1"
requests = "^2.31.0"
sqlalchemy = "^2.0.22"
texttable = "^1.7.0"
[tool.poetry.group.dev.dependencies]
ipython = "^8.16.1"
ruff = "^0.2.1"
[tool.poetry.group.test.dependencies]
coverage = "^7.3.2"
pytest = "^7.4.3"
pytest-cov = "^4.1.0"
pytest-xdist = "^3.3.1"
[tool.poetry.urls]
"Sponsor" = "https://owasp.org/donate/?reponame=www-project-nettacker&title=OWASP+Nettacker"
[tool.coverage.run]
branch = true
[tool.isort]
known_first_party = ["nettacker", "tests"]
line_length = 99
multi_line_output = 3
no_inline_sort = true
profile = "black"
[tool.pytest.ini_options]
addopts = "--cov=src/nettacker --cov-config=pyproject.toml --cov-report term --cov-report xml --dist loadscope --no-cov-on-fail --numprocesses auto"
testpaths = ["tests"]
[tool.ruff]
line-length = 99
# [tool.ruff.lint]
# select = ["E4", "E7", "E9", "F", "T"]
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

View File

@ -1,2 +0,0 @@
libssl-dev
gcc

View File

@ -1,2 +0,0 @@
flake8==7.0.0
ipython==8.18.1

View File

@ -1,16 +0,0 @@
argparse==1.4.0
netaddr==0.9.0
ipaddr==2.2.0
requests==2.31.0
aiohttp==3.9.5
asyncio==3.4.3
paramiko==3.4.0
texttable==1.6.7
PySocks==1.7.1 # library_name=socks # the import name differs from the package name; this is checked on startup
pyOpenSSL==23.2.0 # library_name=OpenSSL
flask==3.0.1
SQLAlchemy>=1.4.43 # library_name=sqlalchemy
py3DNS==4.0.0 # library_name=DNS
numpy==1.26.3
terminable_thread==0.7.1
PyYAML==6.0.1 # library_name=yaml

View File

View File

View File

@ -1,29 +1,11 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
from core.load_modules import load_all_modules, load_all_profiles
from core.load_modules import load_all_graphs
from core.alert import messages
from flask import abort
from config import nettacker_paths
def structure(status="", msg=""):
"""
basic JSON message structure
Args:
status: status (ok, failed)
msg: the message content
Returns:
a JSON message
"""
return {
"status": status,
"msg": msg
}
from nettacker.config import Config
from nettacker.core.app import Nettacker
from nettacker.core.messages import messages as _
from nettacker.core.messages import get_languages
def get_value(flask_request, key):
@ -37,13 +19,12 @@ def get_value(flask_request, key):
Returns:
the value content if found otherwise None
"""
return dict(
flask_request.args
).get(key) or dict(
flask_request.form
).get(key) or dict(
flask_request.cookies
).get(key) or ""
return (
dict(flask_request.args).get(key)
or dict(flask_request.form).get(key)
or dict(flask_request.cookies).get(key)
or ""
)
def mime_types():
@ -54,6 +35,9 @@ def mime_types():
all mime types in json
"""
return {
".3g2": "video/3gpp2",
".3gp": "video/3gpp",
".7z": "application/x-7z-compressed",
".aac": "audio/aac",
".abw": "application/x-abiword",
".arc": "application/octet-stream",
@ -90,8 +74,8 @@ def mime_types():
".ogv": "video/ogg",
".ogx": "application/ogg",
".otf": "font/otf",
".png": "image/png",
".pdf": "application/pdf",
".png": "image/png",
".ppt": "application/vnd.ms-powerpoint",
".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
".rar": "application/x-rar-compressed",
@ -118,11 +102,8 @@ def mime_types():
".xml": "application/xml",
".xul": "application/vnd.mozilla.xul+xml",
".zip": "application/zip",
".3gp": "video/3gpp",
"audio/3gpp": "video",
".3g2": "video/3gpp2",
"audio/3gpp2": "video",
".7z": "application/x-7z-compressed"
}
@ -136,7 +117,7 @@ def get_file(filename):
Returns:
content of the file or abort(404)
"""
if not os.path.normpath(filename).startswith(nettacker_paths()["web_static_files_path"]):
if not os.path.normpath(filename).startswith(str(Config.path.web_static_dir)):
abort(404)
try:
return open(filename, "rb").read()
@ -159,7 +140,7 @@ def api_key_is_valid(app, flask_request):
"""
if app.config["OWASP_NETTACKER_CONFIG"]["api_access_key"] != get_value(flask_request, "key"):
abort(401, messages("API_invalid"))
abort(401, _("API_invalid"))
return
@ -170,40 +151,37 @@ def languages_to_country():
Returns:
HTML code for each language with its country flag
"""
from core.load_modules import load_all_languages
languages = load_all_languages()
languages = get_languages()
res = ""
flags = {
"ar": "sa",
"bn": "in",
"de": "de",
"el": "gr",
"fr": "fr",
"en": "us",
"es": "es",
"fa": "ir",
"fr": "fr",
"hi": "in",
"hy": "am",
"id": "id",
"it": "it",
"iw": "il",
"ja": "jp",
"ko": "kr",
"nl": "nl",
"ps": "ps",
"tr": "tr",
"de": "de",
"ko": "kr",
"it": "it",
"ja": "jp",
"fa": "ir",
"hy": "am",
"ar": "sa",
"zh-cn": "cn",
"vi": "vi",
"ru": "ru",
"hi": "in",
"ur": "pk",
"id": "id",
"es": "es",
"iw": "il",
"pt-br": "br",
"bn": "in"
"ru": "ru",
"tr": "tr",
"ur": "pk",
"vi": "vi",
"zh-cn": "cn",
}
for language in languages:
res += """<option {2} id="{0}" data-content='<span class="flag-icon flag-icon-{1}"
res += """<option {2} id="{0}" data-content='<span class="flag-icon flag-icon-{1}"
value="{0}"></span> {0}'></option>""".format(
language,
flags[language],
"selected" if language == "en" else ""
language, flags[language], "selected" if language == "en" else ""
)
return res
@ -215,11 +193,15 @@ def graphs():
Returns:
HTML content or available graphs
"""
res = """<label><input id="" type="radio" name="graph_name" value="" class="radio"><a
class="label label-default">None</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"""
for graph in load_all_graphs():
res += """<label><input id="{0}" type="radio" name="graph_name" value="{0}" class="radio"><a
class="label label-default">{0}</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;""".format(graph)
res = """
<label><input id="" type="radio" name="graph_name" value="" class="radio">
<a class="label label-default">None</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"""
for graph in Nettacker.load_graphs():
res += """
<label><input id="{0}" type="radio" name="graph_name" value="{0}" class="radio">
<a class="label label-default">{0}</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;""".format(
graph
)
return res
@ -231,16 +213,21 @@ def profiles():
HTML content or available profiles
"""
res = ""
for profile in sorted(load_all_profiles().keys()):
label = "success" if (
profile == "scan"
) else "warning" if (
profile == "brute"
) else "danger" if (
profile == "vulnerability"
) else "default"
res += """<label><input id="{0}" type="checkbox" class="checkbox checkbox-{0}"><a class="label
label-{1}">{0}</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;""".format(profile, label)
for profile in sorted(Nettacker.load_profiles().keys()):
label = (
"success"
if (profile == "scan")
else "warning"
if (profile == "brute")
else "danger"
if (profile == "vulnerability")
else "default"
)
res += """
<label><input id="{0}" type="checkbox" class="checkbox checkbox-{0}">
<a class="label label-{1}">{0}</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;""".format(
profile, label
)
return res
@ -251,24 +238,30 @@ def scan_methods():
Returns:
HTML content or available modules
"""
methods = load_all_modules()
methods = Nettacker.load_modules()
methods.pop("all")
res = ""
for sm in methods.keys():
label = "success" if sm.endswith(
"_scan"
) else "warning" if sm.endswith(
"_brute"
) else "danger" if sm.endswith(
"_vuln"
) else "default"
profile = "scan" if sm.endswith(
"_scan"
) else "brute" if sm.endswith(
"_brute"
) else "vuln" if sm.endswith(
"_vuln"
) else "default"
label = (
"success"
if sm.endswith("_scan")
else "warning"
if sm.endswith("_brute")
else "danger"
if sm.endswith("_vuln")
else "default"
)
profile = (
"scan"
if sm.endswith("_scan")
else "brute"
if sm.endswith("_brute")
else "vuln"
if sm.endswith("_vuln")
else "default"
)
res += """<label><input id="{0}" type="checkbox" class="checkbox checkbox-{2}-module">
<a class="label label-{1}">{0}</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;""".format(sm, label, profile)
<a class="label label-{1}">{0}</a></label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;""".format(
sm, label, profile
)
return res

View File

@ -1,55 +1,53 @@
# !/usr/bin/env python
# -*- coding: utf-8 -*-
import multiprocessing
import time
import random
import csv
import json
import string
import multiprocessing
import os
import copy
import random
import string
import time
from types import SimpleNamespace
from database.db import create_connection, get_logs_by_scan_unique_id
from database.models import Report
from flask import Flask
from flask import jsonify
from flask import request as flask_request
from flask import render_template
from flask import abort
from flask import Response
from flask import make_response
from core.alert import write_to_api_console
from core.alert import messages
from core.die import die_success, die_failure
from core.time import now
from api.api_core import structure
from api.api_core import get_value
from api.api_core import get_file
from api.api_core import mime_types
from api.api_core import scan_methods
from api.api_core import profiles
from api.api_core import graphs
from api.api_core import languages_to_country
from api.api_core import api_key_is_valid
from database.db import select_reports
from database.db import get_scan_result
from database.db import last_host_logs
from database.db import logs_to_report_json
from database.db import search_logs
from database.db import logs_to_report_html
from config import nettacker_global_config
from core.scan_targets import start_scan_processes
from core.args_loader import check_all_required
app = Flask(
__name__,
template_folder=nettacker_global_config()['nettacker_paths']['web_static_files_path']
from nettacker.api.core import (
get_value,
get_file,
mime_types,
scan_methods,
profiles,
graphs,
languages_to_country,
api_key_is_valid,
)
from nettacker.core.utils.common import now
from flask import Flask, jsonify
from flask import request as flask_request
from flask import render_template, abort, Response, make_response
from nettacker import logger
from nettacker.api.helpers import structure
from nettacker.config import Config
from nettacker.core.app import Nettacker
from nettacker.core.die import die_failure
from nettacker.core.messages import messages as _
from nettacker.database.db import (
create_connection,
get_logs_by_scan_id,
select_reports,
get_scan_result,
last_host_logs,
logs_to_report_json,
search_logs,
logs_to_report_html,
)
from nettacker.database.models import Report
log = logger.get_logger()
app = Flask(__name__, template_folder=str(Config.path.web_static_dir))
app.config.from_object(__name__)
nettacker_application_config = nettacker_global_config()['nettacker_user_application_config']
nettacker_application_config.update(nettacker_global_config()['nettacker_api_config'])
del nettacker_application_config['api_access_key']
nettacker_application_config = Config.settings.as_dict()
nettacker_application_config.update(Config.api.as_dict())
del nettacker_application_config["api_access_key"]
@app.errorhandler(400)
@ -63,12 +61,7 @@ def error_400(error):
Returns:
400 JSON error
"""
return jsonify(
structure(
status="error",
msg=error.description
)
), 400
return jsonify(structure(status="error", msg=error.description)), 400
@app.errorhandler(401)
@ -82,12 +75,7 @@ def error_401(error):
Returns:
401 JSON error
"""
return jsonify(
structure(
status="error",
msg=error.description
)
), 401
return jsonify(structure(status="error", msg=error.description)), 401
@app.errorhandler(403)
@ -101,12 +89,7 @@ def error_403(error):
Returns:
403 JSON error
"""
return jsonify(
structure(
status="error",
msg=error.description
)
), 403
return jsonify(structure(status="error", msg=error.description)), 403
@app.errorhandler(404)
@ -120,12 +103,7 @@ def error_404(error):
Returns:
404 JSON error
"""
return jsonify(
structure(
status="error",
msg=messages("not_found")
)
), 404
return jsonify(structure(status="error", msg=_("not_found"))), 404
@app.before_request
@ -138,8 +116,11 @@ def limit_remote_addr():
"""
# IP Limitation
if app.config["OWASP_NETTACKER_CONFIG"]["api_client_whitelisted_ips"]:
if flask_request.remote_addr not in app.config["OWASP_NETTACKER_CONFIG"]["api_client_whitelisted_ips"]:
abort(403, messages("unauthorized_IP"))
if (
flask_request.remote_addr
not in app.config["OWASP_NETTACKER_CONFIG"]["api_client_whitelisted_ips"]
):
abort(403, _("unauthorized_IP"))
return
@ -155,12 +136,9 @@ def access_log(response):
the flask response
"""
if app.config["OWASP_NETTACKER_CONFIG"]["api_access_log"]:
log_request = open(
app.config["OWASP_NETTACKER_CONFIG"]["api_access_log"],
"ab"
)
log_request = open(app.config["OWASP_NETTACKER_CONFIG"]["api_access_log"], "ab")
log_request.write(
"{0} [{1}] {2} \"{3} {4}\" {5} {6} {7}\r\n".format(
'{0} [{1}] {2} "{3} {4}" {5} {6} {7}\r\n'.format(
flask_request.remote_addr,
now(),
flask_request.host,
@ -168,7 +146,7 @@ def access_log(response):
flask_request.full_path,
flask_request.user_agent,
response.status_code,
json.dumps(flask_request.form)
json.dumps(flask_request.form),
).encode()
)
log_request.close()
@ -188,16 +166,8 @@ def get_statics(path):
"""
static_types = mime_types()
return Response(
get_file(
os.path.join(
nettacker_global_config()['nettacker_paths']['web_static_files_path'],
path
)
),
mimetype=static_types.get(
os.path.splitext(path)[1],
"text/html"
)
get_file(os.path.join(Config.path.web_static_dir, path)),
mimetype=static_types.get(os.path.splitext(path)[1], "text/html"),
)
@ -209,15 +179,13 @@ def index():
Returns:
rendered HTML page
"""
from config import nettacker_user_application_config
filename = nettacker_user_application_config()["report_path_filename"]
return render_template(
"index.html",
selected_modules=scan_methods(),
profile=profiles(),
languages=languages_to_country(),
graphs=graphs(),
filename=filename
filename=Config.settings.report_path_filename,
)
@ -234,18 +202,13 @@ def new_scan():
for key in nettacker_application_config:
if key not in form_values:
form_values[key] = nettacker_application_config[key]
options = check_all_required(
None,
api_forms=SimpleNamespace(**copy.deepcopy(form_values))
)
app.config["OWASP_NETTACKER_CONFIG"]["options"] = options
new_process = multiprocessing.Process(target=start_scan_processes, args=(options,))
new_process.start()
return jsonify(
vars(
options
)
), 200
nettacker_app = Nettacker(api_arguments=SimpleNamespace(**form_values))
app.config["OWASP_NETTACKER_CONFIG"]["options"] = nettacker_app.arguments
# new_process = multiprocessing.Process(target=start_scan_processes, args=(options,))
# new_process.start()
nettacker_app.run()
return jsonify(vars(nettacker_app.arguments)), 200
@app.route("/session/check", methods=["GET"])
@ -257,12 +220,7 @@ def session_check():
a JSON message if it's valid otherwise abort(401)
"""
api_key_is_valid(app, flask_request)
return jsonify(
structure(
status="ok",
msg=messages("browser_session_valid")
)
), 200
return jsonify(structure(status="ok", msg=_("browser_session_valid"))), 200
@app.route("/session/set", methods=["GET", "POST"])
@ -275,14 +233,7 @@ def session_set():
response if success otherwise abort(403)
"""
api_key_is_valid(app, flask_request)
res = make_response(
jsonify(
structure(
status="ok",
msg=messages("browser_session_valid")
)
)
)
res = make_response(jsonify(structure(status="ok", msg=_("browser_session_valid"))))
res.set_cookie("key", value=app.config["OWASP_NETTACKER_CONFIG"]["api_access_key"])
return res
@ -296,14 +247,7 @@ def session_kill():
a 200 HTTP response with set-cookie to "expired"
to unset the cookie on the browser
"""
res = make_response(
jsonify(
structure(
status="ok",
msg=messages("browser_session_killed")
)
)
)
res = make_response(jsonify(structure(status="ok", msg=_("browser_session_killed"))))
res.set_cookie("key", "", expires=0)
return res
@ -320,11 +264,7 @@ def get_results():
page = get_value(flask_request, "page")
if not page:
page = 1
return jsonify(
select_reports(
int(page)
)
), 200
return jsonify(select_reports(int(page))), 200
@app.route("/results/get", methods=["GET"])
@ -338,22 +278,17 @@ def get_result_content():
api_key_is_valid(app, flask_request)
scan_id = get_value(flask_request, "id")
if not scan_id:
return jsonify(
structure(
status="error",
msg=messages("invalid_scan_id")
)
), 400
filename, file_content = get_scan_result(scan_id)
return jsonify(structure(status="error", msg=_("invalid_scan_id"))), 400
try:
filename, file_content = get_scan_result(scan_id)
except Exception:
return jsonify(structure(status="error", msg="database error!")), 500
return Response(
file_content,
mimetype=mime_types().get(
os.path.splitext(filename)[1],
"text/plain"
),
headers={
'Content-Disposition': 'attachment;filename=' + filename.split('/')[-1]
}
mimetype=mime_types().get(os.path.splitext(filename)[1], "text/plain"),
headers={"Content-Disposition": "attachment;filename=" + filename.split("/")[-1]},
)
@ -369,25 +304,14 @@ def get_results_json():
session = create_connection()
result_id = get_value(flask_request, "id")
if not result_id:
return jsonify(
structure(
status="error",
msg=messages("invalid_scan_id")
)
), 400
return jsonify(structure(status="error", msg=_("invalid_scan_id"))), 400
scan_details = session.query(Report).filter(Report.id == result_id).first()
json_object = json.dumps(
get_logs_by_scan_unique_id(
scan_details.scan_unique_id
)
)
filename = ".".join(scan_details.report_path_filename.split('.')[:-1])[1:] + '.json'
json_object = json.dumps(get_logs_by_scan_id(scan_details.scan_id))
filename = ".".join(scan_details.report_path_filename.split(".")[:-1])[1:] + ".json"
return Response(
json_object,
mimetype='application/json',
headers={
'Content-Disposition': 'attachment;filename=' + filename
}
mimetype="application/json",
headers={"Content-Disposition": "attachment;filename=" + filename},
)
@ -403,37 +327,22 @@ def get_results_csv(): # todo: need to fix time format
session = create_connection()
result_id = get_value(flask_request, "id")
if not result_id:
return jsonify(
structure(
status="error",
msg=messages("invalid_scan_id")
)
), 400
return jsonify(structure(status="error", msg=_("invalid_scan_id"))), 400
scan_details = session.query(Report).filter(Report.id == result_id).first()
data = get_logs_by_scan_unique_id(scan_details.scan_unique_id)
data = get_logs_by_scan_id(scan_details.scan_id)
keys = data[0].keys()
filename = ".".join(scan_details.report_path_filename.split('.')[:-1])[1:] + '.csv'
filename = ".".join(scan_details.report_path_filename.split(".")[:-1])[1:] + ".csv"
with open(filename, "w") as report_path_filename:
dict_writer = csv.DictWriter(
report_path_filename,
fieldnames=keys,
quoting=csv.QUOTE_ALL
)
dict_writer = csv.DictWriter(report_path_filename, fieldnames=keys, quoting=csv.QUOTE_ALL)
dict_writer.writeheader()
for event in data:
dict_writer.writerow(
{
key: value for key, value in event.items() if key in keys
}
)
with open(filename, 'r') as report_path_filename:
dict_writer.writerow({key: value for key, value in event.items() if key in keys})
with open(filename, "r") as report_path_filename:
reader = report_path_filename.read()
return Response(
reader,
mimetype='text/csv',
headers={
'Content-Disposition': 'attachment;filename=' + filename
}
mimetype="text/csv",
headers={"Content-Disposition": "attachment;filename=" + filename},
)
@ -449,11 +358,7 @@ def get_last_host_logs(): # need to check
page = get_value(flask_request, "page")
if not page:
page = 1
return jsonify(
last_host_logs(
int(page)
)
), 200
return jsonify(last_host_logs(int(page))), 200
@app.route("/logs/get_html", methods=["GET"])
@ -466,9 +371,7 @@ def get_logs_html(): # todo: check until here - ali
"""
api_key_is_valid(app, flask_request)
target = get_value(flask_request, "target")
return make_response(
logs_to_report_html(target)
)
return make_response(logs_to_report_html(target))
@app.route("/logs/get_json", methods=["GET"])
@ -483,17 +386,15 @@ def get_logs():
target = get_value(flask_request, "target")
data = logs_to_report_json(target)
json_object = json.dumps(data)
filename = "report-" + now(
model="%Y_%m_%d_%H_%M_%S"
) + "".join(
random.choice(string.ascii_lowercase) for _ in range(10)
filename = (
"report-"
+ now(format="%Y_%m_%d_%H_%M_%S")
+ "".join(random.choice(string.ascii_lowercase) for _ in range(10))
)
return Response(
json_object,
mimetype='application/json',
headers={
'Content-Disposition': 'attachment;filename=' + filename + '.json'
}
mimetype="application/json",
headers={"Content-Disposition": "attachment;filename=" + filename + ".json"},
)
@ -509,33 +410,22 @@ def get_logs_csv():
target = get_value(flask_request, "target")
data = logs_to_report_json(target)
keys = data[0].keys()
filename = "report-" + now(
model="%Y_%m_%d_%H_%M_%S"
) + "".join(
random.choice(
string.ascii_lowercase
) for _ in range(10)
filename = (
"report-"
+ now(format="%Y_%m_%d_%H_%M_%S")
+ "".join(random.choice(string.ascii_lowercase) for _ in range(10))
)
with open(filename, "w") as report_path_filename:
dict_writer = csv.DictWriter(
report_path_filename,
fieldnames=keys,
quoting=csv.QUOTE_ALL
)
dict_writer = csv.DictWriter(report_path_filename, fieldnames=keys, quoting=csv.QUOTE_ALL)
dict_writer.writeheader()
for event in data:
dict_writer.writerow(
{
key: value for key, value in event.items() if key in keys
}
)
with open(filename, 'r') as report_path_filename:
dict_writer.writerow({key: value for key, value in event.items() if key in keys})
with open(filename, "r") as report_path_filename:
reader = report_path_filename.read()
return Response(
reader, mimetype='text/csv',
headers={
'Content-Disposition': 'attachment;filename=' + filename + '.csv'
}
reader,
mimetype="text/csv",
headers={"Content-Disposition": f"attachment;filename={filename}.csv"},
)
@ -576,7 +466,7 @@ def start_api_subprocess(options):
"api_cert": options.api_cert,
"api_cert_key": options.api_cert_key,
"language": options.language,
"options": options
"options": options,
}
try:
if options.api_cert and options.api_cert_key:
@ -584,19 +474,16 @@ def start_api_subprocess(options):
host=options.api_hostname,
port=options.api_port,
debug=options.api_debug_mode,
ssl_context=(
options.api_cert,
options.api_cert_key
),
threaded=True
ssl_context=(options.api_cert, options.api_cert_key),
threaded=True,
)
else:
app.run(
host=options.api_hostname,
port=options.api_port,
debug=options.api_debug_mode,
ssl_context='adhoc',
threaded=True
ssl_context="adhoc",
threaded=True,
)
except Exception as e:
die_failure(str(e))
@ -610,16 +497,8 @@ def start_api_server(options):
options: all options
"""
# Starting the API
write_to_api_console(
messages("API_key").format(
options.api_port,
options.api_access_key
)
)
p = multiprocessing.Process(
target=start_api_subprocess,
args=(options,)
)
log.write_to_api_console(_("API_key").format(options.api_port, options.api_access_key))
p = multiprocessing.Process(target=start_api_subprocess, args=(options,))
p.start()
# Sometimes it takes a long time to terminate Flask with CTRL+C,
# so it's better to use KeyboardInterrupt to terminate!
@ -630,4 +509,3 @@ def start_api_server(options):
for process in multiprocessing.active_children():
process.terminate()
break
die_success()

View File

@ -0,0 +1,12 @@
def structure(status="", msg=""):
"""
basic JSON message structure
Args:
status: status (ok, failed)
msg: the message content
Returns:
a JSON message
"""
return {"status": status, "msg": msg}

155
src/nettacker/config.py Normal file
View File

@ -0,0 +1,155 @@
import importlib.metadata
import inspect
from pathlib import Path
from nettacker.core.utils.common import now, generate_random_token
CWD = Path.cwd()
PACKAGE_PATH = Path(__file__).parent
def version_info():
"""
version information of the framework
Returns:
an array of version and code name
"""
return (
importlib.metadata.version("nettacker"),
open(PathConfig.release_name_file).read().strip(),
)
class ConfigBase:
@classmethod
def as_dict(cls):
return {attr_name: getattr(cls, attr_name) for attr_name in cls()}
def __init__(self) -> None:
self.attributes = sorted(
(
attribute[0]
for attribute in inspect.getmembers(self)
if not attribute[0].startswith("_") and not inspect.ismethod(attribute[1])
)
)
self.idx = 0
def __iter__(self):
yield from self.attributes
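# A minimal sketch of the ConfigBase contract (DemoConfig is purely
# illustrative): public, non-method class attributes become the keys of
# as_dict(); underscore-prefixed names are skipped.
class DemoConfig(ConfigBase):
    host = "127.0.0.1"
    port = 5000
    _secret = "hidden"

assert DemoConfig.as_dict() == {"host": "127.0.0.1", "port": 5000}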
class ApiConfig(ConfigBase):
"""OWASP Nettacker API Default Configuration"""
api_access_log = str(CWD / ".data/nettacker.log")
api_access_key = generate_random_token(32)
api_client_whitelisted_ips = []  # disabled; to enable, provide a list of IPs/CIDRs/ranges, e.g.:
# [
# "127.0.0.1",
# "10.0.0.0/24",
# "192.168.1.1-192.168.1.255"
# ],
api_debug_mode = False
api_hostname = "0.0.0.0"
api_port = 5000
start_api_server = False
class DbConfig(ConfigBase):
"""
Database Config (could be modified by user)
For sqlite:
set engine to "sqlite" and name to the database file to use;
the remaining fields can be left empty.
For mysql:
set engine to "mysql", name to the database to create, and fill in
username, password, host and port of the MySQL server.
"""
engine = "sqlite"
name = str(CWD / ".data/nettacker.db")
host = ""
port = ""
username = ""
password = ""
class PathConfig:
"""
Filesystem paths used by the framework (can be modified by the user):
package resources plus the working, tmp and results directories.
"""
data_dir = CWD / ".data"
database_file = CWD / ".data/nettacker.db"
graph_dir = PACKAGE_PATH / "lib/graph"
home_dir = CWD
locale_dir = PACKAGE_PATH / "locale"
logo_file = PACKAGE_PATH / "logo.txt"
module_protocols_dir = PACKAGE_PATH / "core/lib"
modules_dir = PACKAGE_PATH / "modules"
payloads_dir = PACKAGE_PATH / "lib/payloads"
release_name_file = PACKAGE_PATH / "release_name.txt"
results_dir = CWD / ".data/results"
tmp_dir = CWD / ".data/tmp"
web_static_dir = PACKAGE_PATH / "web/static"
user_agents_file = PACKAGE_PATH / "lib/payloads/User-Agents/web_browsers_user_agents.txt"
class DefaultSettings(ConfigBase):
"""OWASP Nettacker Default Configuration"""
excluded_modules = None
graph_name = "d3_tree_v2_graph"
language = "en"
modules_extra_args = None
parallel_module_scan = 1
passwords = None
passwords_list = None
ping_before_scan = False
ports = None
profiles = None
report_path_filename = "{results_path}/results_{date_time}_{random_chars}.html".format(
results_path=PathConfig.results_dir,
date_time=now(format="%Y_%m_%d_%H_%M_%S"),
random_chars=generate_random_token(10),
)
retries = 1
scan_ip_range = False
scan_subdomains = False
selected_modules = None
set_hardware_usage = "maximum" # low, normal, high, maximum
show_all_modules = False
show_all_profiles = False
show_help_menu = False
show_version = False
skip_service_discovery = False
socks_proxy = None
targets = None
targets_list = None
thread_per_host = 100
time_sleep_between_requests = 0.0
timeout = 3.0
user_agent = "Nettacker {version_number} {version_code}".format(
version_number=version_info()[0], version_code=version_info()[1]
)
usernames = None
usernames_list = None
verbose_event = False
verbose_mode = False
class Config:
api = ApiConfig()
db = DbConfig()
path = PathConfig()
settings = DefaultSettings()
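# Usage sketch: downstream modules read settings through the Config
# aggregate (values shown are the defaults defined above).
assert Config.api.api_port == 5000
assert Config.settings.language == "en"
assert "language" in Config.settings.as_dict()  # plain dict, as used by the API engine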

334
src/nettacker/core/app.py Normal file
View File

@ -0,0 +1,334 @@
import copy
import json
import os
import socket
import sys
from threading import Thread
import multiprocess
import numpy
from nettacker.database.mysql import mysql_create_database, mysql_create_tables
from nettacker.database.postgresql import postgres_create_database
from nettacker.database.sqlite import sqlite_create_tables
from nettacker import logger
from nettacker.config import Config, version_info
from nettacker.core.arg_parser import ArgParser
from nettacker.core.die import die_failure
from nettacker.core.graph import create_report
from nettacker.core.ip import (
get_ip_range,
generate_ip_range,
is_single_ipv4,
is_ipv4_range,
is_ipv4_cidr,
is_single_ipv6,
is_ipv6_range,
is_ipv6_cidr,
)
from nettacker.core.messages import messages as _
from nettacker.core.module import Module
from nettacker.core.socks_proxy import set_socks_proxy
from nettacker.core.utils import common as utils
from nettacker.core.utils.common import wait_for_threads_to_finish
from nettacker.database.db import find_events, remove_old_logs
from nettacker.logger import TerminalCodes
log = logger.get_logger()
class Nettacker(ArgParser):
def __init__(self, api_arguments=None):
self.print_logo()
self.check_dependencies()
log.info(_("scan_started"))
super().__init__(api_arguments=api_arguments)
@staticmethod
def print_logo():
"""
OWASP Nettacker Logo
"""
log.write_to_api_console(
open(Config.path.logo_file)
.read()
.format(
cyan=TerminalCodes.CYAN.value,
red=TerminalCodes.RED.value,
rst=TerminalCodes.RESET.value,
v1=version_info()[0],
v2=version_info()[1],
yellow=TerminalCodes.YELLOW.value,
)
)
log.reset_color()
def check_dependencies(self):
if sys.platform not in {"darwin", "linux"}:
die_failure(_("error_platform"))
if not os.path.exists(Config.path.home_dir):
try:
os.mkdir(Config.path.home_dir)
os.mkdir(Config.path.tmp_dir)
os.mkdir(Config.path.results_dir)
except Exception:
die_failure("cannot access the directory {0}".format(Config.path.home_dir))
if not os.path.exists(Config.path.tmp_dir):
try:
os.mkdir(Config.path.tmp_dir)
except Exception:
die_failure("cannot access the directory {0}".format(Config.path.tmp_dir))
if not os.path.exists(Config.path.results_dir):
try:
os.mkdir(Config.path.results_dir)
except Exception:
die_failure("cannot access the directory {0}".format(Config.path.results_dir))
if Config.db.engine == "sqlite":
try:
if not os.path.isfile(Config.path.database_file):
sqlite_create_tables()
except Exception:
die_failure("cannot access the directory {0}".format(Config.path.home_dir))
elif Config.db.engine == "mysql":
try:
mysql_create_database()
mysql_create_tables()
except Exception:
die_failure(_("database_connection_failed"))
elif Config.db.engine == "postgres":
try:
postgres_create_database()
except Exception:
die_failure(_("database_connection_failed"))
else:
die_failure(_("invalid_database"))
def expand_targets(self, scan_id):
"""
determine and expand targets (URLs, IP ranges, subdomains, etc.)
Args:
scan_id: unique scan identifier
Returns:
a list of unique targets
"""
targets = []
for target in self.arguments.targets:
if "://" in target:
# remove url proto; uri; port
target = target.split("://")[1].split("/")[0].split(":")[0]
targets.append(target)
# single IPs
elif is_single_ipv4(target) or is_single_ipv6(target):
if self.arguments.scan_ip_range:
targets += get_ip_range(target)
else:
targets.append(target)
# IP ranges
elif (
is_ipv4_range(target)
or is_ipv6_range(target)
or is_ipv4_cidr(target)
or is_ipv6_cidr(target)
):
targets += generate_ip_range(target)
# domains probably
else:
targets.append(target)
self.arguments.targets = targets
# subdomain_scan
if self.arguments.scan_subdomains:
selected_modules = self.arguments.selected_modules
self.arguments.selected_modules = ["subdomain_scan"]
self.start_scan(scan_id)
self.arguments.selected_modules = selected_modules
if "subdomain_scan" in self.arguments.selected_modules:
self.arguments.selected_modules.remove("subdomain_scan")
for target in copy.deepcopy(self.arguments.targets):
for row in find_events(target, "subdomain_scan", scan_id):
for sub_domain in json.loads(row.json_event)["response"]["conditions_results"][
"content"
]:
if sub_domain not in self.arguments.targets:
self.arguments.targets.append(sub_domain)
# icmp_scan
if self.arguments.ping_before_scan:
if os.geteuid() == 0:
selected_modules = self.arguments.selected_modules
self.arguments.selected_modules = ["icmp_scan"]
self.start_scan(scan_id)
self.arguments.selected_modules = selected_modules
if "icmp_scan" in self.arguments.selected_modules:
self.arguments.selected_modules.remove("icmp_scan")
self.arguments.targets = self.filter_target_by_event(targets, scan_id, "icmp_scan")
else:
log.warn(_("icmp_need_root_access"))
if "icmp_scan" in self.arguments.selected_modules:
self.arguments.selected_modules.remove("icmp_scan")
# port_scan
if not self.arguments.skip_service_discovery:
self.arguments.skip_service_discovery = True
selected_modules = self.arguments.selected_modules
self.arguments.selected_modules = ["port_scan"]
self.start_scan(scan_id)
self.arguments.selected_modules = selected_modules
if "port_scan" in self.arguments.selected_modules:
self.arguments.selected_modules.remove("port_scan")
self.arguments.targets = self.filter_target_by_event(targets, scan_id, "port_scan")
self.arguments.skip_service_discovery = False
return list(set(self.arguments.targets))
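# Standalone sketch of the URL normalisation at the top of
# expand_targets: scheme, path and port are stripped before scanning.
target = "https://example.com:8080/admin"
assert target.split("://")[1].split("/")[0].split(":")[0] == "example.com"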
def filter_target_by_event(self, targets, scan_id, module_name):
for target in copy.deepcopy(targets):
if not find_events(target, module_name, scan_id):
targets.remove(target)
return targets
def run(self):
"""
prepare the attacks and manage multi-processing across hosts
Returns:
True when no live targets are found, otherwise the scan's exit code
"""
scan_id = utils.generate_random_token(32)
log.info(_("regrouping_targets"))
# find total number of targets + types + expand (subdomain, IPRanges, etc)
# optimize CPU usage
self.arguments.targets = self.expand_targets(scan_id)
if not self.arguments.targets:
log.info(_("no_live_service_found"))
return True
exit_code = self.start_scan(scan_id)
create_report(self.arguments, scan_id)
log.info(_("done"))
return exit_code
def start_scan(self, scan_id):
number_of_total_targets = len(self.arguments.targets)
target_groups = [
targets.tolist()
for targets in numpy.array_split(
self.arguments.targets,
(
self.arguments.set_hardware_usage
if self.arguments.set_hardware_usage <= len(self.arguments.targets)
else number_of_total_targets
),
)
]
log.info(_("removing_old_db_records"))
for target_group in target_groups:
for target in target_group:
for module_name in self.arguments.selected_modules:
remove_old_logs(
{
"target": target,
"module_name": module_name,
"scan_id": scan_id,
}
)
for _i in range(target_groups.count([])):
target_groups.remove([])
log.info(_("start_multi_process").format(number_of_total_targets, len(target_groups)))
active_processes = []
for t_id, target_group in enumerate(target_groups):
process = multiprocess.Process(
target=self.scan_target_group, args=(target_group, scan_id, t_id)
)
process.start()
active_processes.append(process)
return wait_for_threads_to_finish(active_processes, sub_process=True)
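# Standalone sketch (hypothetical targets) of the grouping start_scan
# performs: numpy.array_split caps the group count, one process per group.
import numpy

targets = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"]
hardware_usage = 2  # set_hardware_usage after select_maximum_cpu_core()
groups = [
    group.tolist()
    for group in numpy.array_split(targets, min(hardware_usage, len(targets)))
]
assert groups == [["10.0.0.1", "10.0.0.2", "10.0.0.3"], ["10.0.0.4", "10.0.0.5"]]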
def scan_target(
self,
target,
module_name,
scan_id,
process_number,
thread_number,
total_number_threads,
):
options = copy.deepcopy(self.arguments)
socket.socket, socket.getaddrinfo = set_socks_proxy(options.socks_proxy)
module = Module(
module_name,
options,
target,
scan_id,
process_number,
thread_number,
total_number_threads,
)
module.load()
module.generate_loops()
module.sort_loops()
module.start()
log.verbose_event_info(
_("finished_parallel_module_scan").format(
process_number, module_name, target, thread_number, total_number_threads
)
)
return os.EX_OK
def scan_target_group(self, targets, scan_id, process_number):
active_threads = []
log.verbose_event_info(_("single_process_started").format(process_number))
total_number_of_modules = len(targets) * len(self.arguments.selected_modules)
total_number_of_modules_counter = 1
for target in targets:
for module_name in self.arguments.selected_modules:
thread = Thread(
target=self.scan_target,
args=(
target,
module_name,
scan_id,
process_number,
total_number_of_modules_counter,
total_number_of_modules,
),
)
thread.name = f"{target} -> {module_name}"
thread.start()
log.verbose_event_info(
_("start_parallel_module_scan").format(
process_number,
module_name,
target,
total_number_of_modules_counter,
total_number_of_modules,
)
)
total_number_of_modules_counter += 1
active_threads.append(thread)
if not wait_for_threads_to_finish(
active_threads, self.arguments.parallel_module_scan, True
):
return False
wait_for_threads_to_finish(active_threads, maximum=None, terminable=True)
return True

View File

@ -0,0 +1,699 @@
import json
import sys
from argparse import ArgumentParser
import yaml
from nettacker.config import version_info, Config
from nettacker.core.die import die_failure, die_success
from nettacker.core.ip import (
is_single_ipv4,
is_single_ipv6,
is_ipv4_cidr,
is_ipv6_range,
is_ipv6_cidr,
is_ipv4_range,
generate_ip_range,
)
from nettacker.core.messages import messages as _
from nettacker.core.template import TemplateLoader
from nettacker.core.utils import common as utils
from nettacker.logger import TerminalCodes, get_logger
log = get_logger()
class ArgParser(ArgumentParser):
def __init__(self, api_arguments=None) -> None:
super().__init__(prog="Nettacker", add_help=False)
self.api_arguments = api_arguments
self.graphs = self.load_graphs()
self.languages = self.load_languages()
self.modules = self.load_modules(full_details=True)
log.info(_("loaded_modules").format(len(self.modules)))
self.profiles = self.load_profiles()
self.add_arguments()
self.parse_arguments()
@staticmethod
def load_graphs():
"""
load all available graphs
Returns:
an array of graph names
"""
graph_names = []
for graph_library in Config.path.graph_dir.glob("*/engine.py"):
graph_names.append(str(graph_library).split("/")[-2] + "_graph")
return list(set(graph_names))
@staticmethod
def load_languages():
"""
Get available languages
Returns:
an array of languages
"""
languages_list = []
for language in Config.path.locale_dir.glob("*.yaml"):
languages_list.append(str(language).split("/")[-1].split(".")[0])
return list(set(languages_list))
@staticmethod
def load_modules(limit=-1, full_details=False):
"""
load all available modules
limit: return a limited number of modules
full_details: include each module's info block
Returns:
a dictionary of module names (mapped to their info when full_details is True)
"""
# Search for Modules
module_names = {}
for module_name in sorted(Config.path.modules_dir.glob("**/*.yaml")):
library = str(module_name).split("/")[-1].split(".")[0]
category = str(module_name).split("/")[-2]
module = f"{library}_{category}"
contents = yaml.safe_load(TemplateLoader(module).open().split("payload:")[0])
module_names[module] = contents["info"] if full_details else None
if len(module_names) == limit:
module_names["..."] = {}
break
module_names = utils.sort_dictionary(module_names)
module_names["all"] = {}
return module_names
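# Sketch of the name derivation above: modules/<category>/<library>.yaml
# becomes "<library>_<category>" (port.yaml under scan/ is an assumed example).
from pathlib import Path

path = Path("src/nettacker/modules/scan/port.yaml")
assert f"{path.stem}_{path.parent.name}" == "port_scan"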
@staticmethod
def load_profiles(limit=-1):
"""
load all available profiles
Returns:
an array of all profile names
"""
all_modules_with_details = ArgParser.load_modules(full_details=True).copy()
profiles = {}
if "..." in all_modules_with_details:
del all_modules_with_details["..."]
del all_modules_with_details["all"]
for key in all_modules_with_details:
for tag in all_modules_with_details[key]["profiles"]:
if tag not in profiles:
profiles[tag] = []
profiles[tag].append(key)
else:
profiles[tag].append(key)
if len(profiles) == limit:
profiles = utils.sort_dictionary(profiles)
profiles["..."] = []
profiles["all"] = []
return profiles
profiles = utils.sort_dictionary(profiles)
profiles["all"] = []
return profiles
def add_arguments(self):
# Engine Options
engine_options = self.add_argument_group(_("engine"), _("engine_input"))
engine_options.add_argument(
"-L",
"--language",
action="store",
dest="language",
default=Config.settings.language,
help=_("select_language").format(self.languages),
)
engine_options.add_argument(
"-v",
"--verbose",
action="store_true",
dest="verbose_mode",
default=Config.settings.verbose_mode,
help=_("verbose_mode"),
)
engine_options.add_argument(
"--verbose-event",
action="store_true",
dest="verbose_event",
default=Config.settings.verbose_event,
help=_("verbose_event"),
)
engine_options.add_argument(
"-V",
"--version",
action="store_true",
default=Config.settings.show_version,
dest="show_version",
help=_("software_version"),
)
engine_options.add_argument(
"-o",
"--output",
action="store",
default=Config.settings.report_path_filename,
dest="report_path_filename",
help=_("save_logs"),
)
engine_options.add_argument(
"--graph",
action="store",
default=Config.settings.graph_name,
dest="graph_name",
help=_("available_graph").format(self.graphs),
)
engine_options.add_argument(
"-h",
"--help",
action="store_true",
default=Config.settings.show_help_menu,
dest="show_help_menu",
help=_("help_menu"),
)
# Target Options
target_options = self.add_argument_group(_("target"), _("target_input"))
target_options.add_argument(
"-i",
"--targets",
action="store",
dest="targets",
default=Config.settings.targets,
help=_("target_list"),
)
target_options.add_argument(
"-l",
"--targets-list",
action="store",
dest="targets_list",
default=Config.settings.targets_list,
help=_("read_target"),
)
# Exclude Module Name
exclude_modules = sorted(self.modules.keys())[:10]
exclude_modules.remove("all")
# Method Options
method_options = self.add_argument_group(_("Method"), _("scan_method_options"))
method_options.add_argument(
"-m",
"--modules",
action="store",
dest="selected_modules",
default=Config.settings.selected_modules,
help=_("choose_scan_method").format(list(self.modules.keys())[:10]),
)
method_options.add_argument(
"--modules-extra-args",
action="store",
dest="modules_extra_args",
default=Config.settings.modules_extra_args,
help=_("modules_extra_args_help"),
)
method_options.add_argument(
"--show-all-modules",
action="store_true",
dest="show_all_modules",
default=Config.settings.show_all_modules,
help=_("show_all_modules"),
)
method_options.add_argument(
"--profile",
action="store",
default=Config.settings.profiles,
dest="profiles",
help=_("select_profile").format(list(self.profiles.keys())[:10]),
)
method_options.add_argument(
"--show-all-profiles",
action="store_true",
dest="show_all_profiles",
default=Config.settings.show_all_profiles,
help=_("show_all_profiles"),
)
method_options.add_argument(
"-x",
"--exclude-modules",
action="store",
dest="excluded_modules",
default=Config.settings.excluded_modules,
help=_("exclude_scan_method").format(exclude_modules),
)
method_options.add_argument(
"-u",
"--usernames",
action="store",
dest="usernames",
default=Config.settings.usernames,
help=_("username_list"),
)
method_options.add_argument(
"-U",
"--users-list",
action="store",
dest="usernames_list",
default=Config.settings.usernames_list,
help=_("username_from_file"),
)
method_options.add_argument(
"-p",
"--passwords",
action="store",
dest="passwords",
default=Config.settings.passwords,
help=_("password_separator"),
)
method_options.add_argument(
"-P",
"--passwords-list",
action="store",
dest="passwords_list",
default=Config.settings.passwords_list,
help=_("read_passwords"),
)
method_options.add_argument(
"-g",
"--ports",
action="store",
dest="ports",
default=Config.settings.ports,
help=_("port_separator"),
)
method_options.add_argument(
"--user-agent",
action="store",
dest="user_agent",
default=Config.settings.user_agent,
help=_("select_user_agent"),
)
method_options.add_argument(
"-T",
"--timeout",
action="store",
dest="timeout",
default=Config.settings.timeout,
type=float,
help=_("read_passwords"),
)
method_options.add_argument(
"-w",
"--time-sleep-between-requests",
action="store",
dest="time_sleep_between_requests",
default=Config.settings.time_sleep_between_requests,
type=float,
help=_("time_to_sleep"),
)
method_options.add_argument(
"-r",
"--range",
action="store_true",
default=Config.settings.scan_ip_range,
dest="scan_ip_range",
help=_("range"),
)
method_options.add_argument(
"-s",
"--sub-domains",
action="store_true",
default=Config.settings.scan_subdomains,
dest="scan_subdomains",
help=_("subdomains"),
)
method_options.add_argument(
"--skip-service-discovery",
action="store_true",
default=Config.settings.skip_service_discovery,
dest="skip_service_discovery",
help=_("skip_service_discovery"),
)
method_options.add_argument(
"-t",
"--thread-per-host",
action="store",
default=Config.settings.thread_per_host,
type=int,
dest="thread_per_host",
help=_("thread_number_connections"),
)
method_options.add_argument(
"-M",
"--parallel-module-scan",
action="store",
default=Config.settings.parallel_module_scan,
type=int,
dest="parallel_module_scan",
help=_("thread_number_modules"),
)
method_options.add_argument(
"--set-hardware-usage",
action="store",
dest="set_hardware_usage",
default=Config.settings.set_hardware_usage,
help=_("set_hardware_usage"),
)
method_options.add_argument(
"-R",
"--socks-proxy",
action="store",
dest="socks_proxy",
default=Config.settings.socks_proxy,
help=_("outgoing_proxy"),
)
method_options.add_argument(
"--retries",
action="store",
dest="retries",
type=int,
default=Config.settings.retries,
help=_("connection_retries"),
)
method_options.add_argument(
"--ping-before-scan",
action="store_true",
dest="ping_before_scan",
default=Config.settings.ping_before_scan,
help=_("ping_before_scan"),
)
# API Options
api_options = self.add_argument_group(_("API"), _("API_options"))
api_options.add_argument(
"--start-api",
action="store_true",
dest="start_api_server",
default=Config.api.start_api_server,
help=_("start_api_server"),
)
api_options.add_argument(
"--api-host",
action="store",
dest="api_hostname",
default=Config.api.api_hostname,
help=_("API_host"),
)
api_options.add_argument(
"--api-port",
action="store",
dest="api_port",
default=Config.api.api_port,
help=_("API_port"),
)
api_options.add_argument(
"--api-debug-mode",
action="store_true",
dest="api_debug_mode",
default=Config.api.api_debug_mode,
help=_("API_debug"),
)
api_options.add_argument(
"--api-access-key",
action="store",
dest="api_access_key",
default=Config.api.api_access_key,
help=_("API_access_key"),
)
api_options.add_argument(
"--api-client-whitelisted-ips",
action="store",
dest="api_client_whitelisted_ips",
default=Config.api.api_client_whitelisted_ips,
help=_("define_white_list"),
)
api_options.add_argument(
"--api-access-log",
action="store",
dest="api_access_log",
default=Config.api.api_access_log,
help=_("API_access_log_file"),
)
api_options.add_argument(
"--api-cert",
action="store",
dest="api_cert",
help=_("API_cert"),
)
api_options.add_argument(
"--api-cert-key",
action="store",
dest="api_cert_key",
help=_("API_cert_key"),
)
def parse_arguments(self):
"""
check all rules and requirements for the arguments (CLI options, or
self.api_arguments when running through the API) and store the
validated result in self.arguments
"""
# Checking Requirements
options = self.api_arguments or self.parse_args()
if options.language not in self.languages:
die_failure("Please select one of these languages {0}".format(self.languages))
# Check Help Menu
if options.show_help_menu:
self.print_help()
log.write("\n\n")
log.write(_("license"))
die_success()
# Check version
if options.show_version:
log.info(
_("current_version").format(
TerminalCodes.YELLOW.value,
version_info()[0],
TerminalCodes.RESET.value,
TerminalCodes.CYAN.value,
version_info()[1],
TerminalCodes.RESET.value,
TerminalCodes.GREEN.value,
)
)
die_success()
if options.show_all_modules:
log.info(_("loading_modules"))
for module in self.modules:
log.info(
_("module_profile_full_information").format(
TerminalCodes.CYAN.value,
module,
TerminalCodes.GREEN.value,
", ".join(
[
"{key}: {value}".format(key=key, value=self.modules[module][key])
for key in self.modules[module]
]
),
)
)
die_success()
if options.show_all_profiles:
log.info(_("loading_profiles"))
for profile in self.profiles:
log.info(
_("module_profile_full_information").format(
TerminalCodes.CYAN.value,
profile,
TerminalCodes.GREEN.value,
", ".join(self.profiles[profile]),
)
)
die_success()
# API mode
if options.start_api_server:
if "--start-api" in sys.argv and self.api_arguments:
die_failure(_("cannot_run_api_server"))
from nettacker.api.engine import start_api_server
if options.api_client_whitelisted_ips:
if isinstance(options.api_client_whitelisted_ips, str):
options.api_client_whitelisted_ips = options.api_client_whitelisted_ips.split(
","
)
whitelisted_ips = []
for ip in options.api_client_whitelisted_ips:
if is_single_ipv4(ip) or is_single_ipv6(ip):
whitelisted_ips.append(ip)
elif (
is_ipv4_range(ip)
or is_ipv6_range(ip)
or is_ipv4_cidr(ip)
or is_ipv6_cidr(ip)
):
whitelisted_ips += generate_ip_range(ip)
options.api_client_whitelisted_ips = whitelisted_ips
start_api_server(options)
# Check the target(s)
if not (options.targets or options.targets_list) or (
options.targets and options.targets_list
):
# self.print_help()
# write("\n")
die_failure(_("error_target"))
if options.targets:
options.targets = list(set(options.targets.split(",")))
if options.targets_list:
try:
options.targets = list(
set(open(options.targets_list, "rb").read().decode().split())
)
except Exception:
die_failure(_("error_target_file").format(options.targets_list))
# check for modules
if not (options.selected_modules or options.profiles):
die_failure(_("scan_method_select"))
if options.selected_modules:
if options.selected_modules == "all":
options.selected_modules = list(set(self.modules.keys()))
options.selected_modules.remove("all")
else:
options.selected_modules = list(set(options.selected_modules.split(",")))
for module_name in options.selected_modules:
if module_name not in self.modules:
die_failure(_("scan_module_not_found").format(module_name))
if options.profiles:
if not options.selected_modules:
options.selected_modules = []
if options.profiles == "all":
options.selected_modules = list(set(self.modules.keys()))
options.selected_modules.remove("all")
else:
options.profiles = list(set(options.profiles.split(",")))
for profile in options.profiles:
if profile not in self.profiles:
die_failure(_("profile_404").format(profile))
for module_name in self.profiles[profile]:
if module_name not in options.selected_modules:
options.selected_modules.append(module_name)
# threading & processing
if options.set_hardware_usage not in {"low", "normal", "high", "maximum"}:
die_failure(_("wrong_hardware_usage"))
options.set_hardware_usage = utils.select_maximum_cpu_core(options.set_hardware_usage)
options.thread_per_host = int(options.thread_per_host)
if not options.thread_per_host >= 1:
options.thread_per_host = 1
options.parallel_module_scan = int(options.parallel_module_scan)
if not options.parallel_module_scan >= 1:
options.parallel_module_scan = 1
# Check for excluding modules
if options.excluded_modules:
options.excluded_modules = options.excluded_modules.split(",")
if "all" in options.excluded_modules:
die_failure(_("error_exclude_all"))
for excluded_module in options.excluded_modules:
if excluded_module in options.selected_modules:
options.selected_modules.remove(excluded_module)
# Check port(s)
if options.ports:
tmp_ports = []
for port in options.ports.split(","):
try:
if "-" in port:
for port_number in range(
int(port.split("-")[0]), int(port.split("-")[1]) + 1
):
if port_number not in tmp_ports:
tmp_ports.append(port_number)
else:
if int(port) not in tmp_ports:
tmp_ports.append(int(port))
except Exception:
die_failure(_("ports_int"))
options.ports = tmp_ports
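# The port expansion above, restated as a self-contained sketch:
# comma-separated values, "-" ranges expanded inclusively, duplicates dropped.
def expand_ports(spec):
    ports = []
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            for port in range(int(start), int(end) + 1):
                if port not in ports:
                    ports.append(port)
        elif int(part) not in ports:
            ports.append(int(part))
    return ports

assert expand_ports("21,25,80-82,80") == [21, 25, 80, 81, 82]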
if options.user_agent == "random_user_agent":
options.user_agents = open(Config.path.user_agents_file).read().split("\n")
# Check user list
if options.usernames:
options.usernames = list(set(options.usernames.split(",")))
elif options.usernames_list:
try:
options.usernames = list(set(open(options.usernames_list).read().split("\n")))
except Exception:
die_failure(_("error_username").format(options.usernames_list))
# Check password list
if options.passwords:
options.passwords = list(set(options.passwords.split(",")))
elif options.passwords_list:
try:
options.passwords = list(set(open(options.passwords_list).read().split("\n")))
except Exception:
die_failure(_("error_passwords").format(options.passwords_list))
# Check output file
try:
temp_file = open(options.report_path_filename, "w")
temp_file.close()
except Exception:
die_failure(_("file_write_error").format(options.report_path_filename))
# Check Graph
if options.graph_name:
if options.graph_name not in self.graphs:
die_failure(_("graph_module_404").format(options.graph_name))
if not (
options.report_path_filename.endswith(".html")
or options.report_path_filename.endswith(".htm")
):
log.warn(_("graph_output"))
options.graph_name = None
# check modules extra args
if options.modules_extra_args:
all_args = {}
for args in options.modules_extra_args.split("&"):
value = args.split("=")[1]
if value.lower() == "true":
value = True
elif value.lower() == "false":
value = False
elif "." in value:
try:
value = float(value)
except Exception:
pass
elif "{" in value or "[" in value:
try:
value = json.loads(value)
except Exception:
pass
else:
try:
value = int(value)
except Exception:
pass
all_args[args.split("=")[0]] = value
options.modules_extra_args = all_args
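# The same coercion rules in isolation (keys are hypothetical; unlike the
# code above, split("=", 1) keeps "=" characters inside values intact).
import json

def parse_extra_args(raw):
    parsed = {}
    for pair in raw.split("&"):
        key, value = pair.split("=", 1)
        if value.lower() == "true":
            value = True
        elif value.lower() == "false":
            value = False
        elif "." in value:
            try:
                value = float(value)
            except ValueError:
                pass
        elif "{" in value or "[" in value:
            try:
                value = json.loads(value)
            except json.JSONDecodeError:
                pass
        else:
            try:
                value = int(value)
            except ValueError:
                pass
        parsed[key] = value
    return parsed

assert parse_extra_args("port=8080&tls=true&rate=0.5") == {
    "port": 8080,
    "tls": True,
    "rate": 0.5,
}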
options.timeout = float(options.timeout)
options.time_sleep_between_requests = float(options.time_sleep_between_requests)
options.retries = int(options.retries)
self.arguments = options

View File

@ -1,15 +1,15 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
from nettacker import logger
log = logger.get_logger()
def die_success():
"""
exit the framework with code 0
"""
from core.color import reset_color
reset_color()
log.reset_color()
sys.exit(0)
@ -20,8 +20,7 @@ def die_failure(msg):
Args:
msg: the error message
"""
from core.color import reset_color
from core.alert import error
error(msg)
reset_color()
log.error(msg)
log.reset_color()
sys.exit(1)

View File

@ -0,0 +1,5 @@
from nettacker.config import Config
def read_from_file(file_path):
return open(Config.path.payloads_dir / file_path).read().split("\n")
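# Usage sketch: paths resolve against Config.path.payloads_dir, e.g. the
# bundled user-agent list referenced from config.py.
user_agents = read_from_file("User-Agents/web_browsers_user_agents.txt")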

170
src/nettacker/core/graph.py Normal file
View File

@ -0,0 +1,170 @@
import csv
import html
import importlib
import json
from datetime import datetime
import texttable
from nettacker.core.utils.common import merge_logs_to_list, now
from nettacker import logger
from nettacker.config import version_info
from nettacker.core.die import die_failure
from nettacker.core.messages import messages as _
from nettacker.database.db import get_logs_by_scan_id, submit_report_to_db
log = logger.get_logger()
def build_graph(graph_name, events):
"""
build a graph
Args:
graph_name: graph name
events: list of events
Returns:
graph in HTML type
"""
log.info(_("build_graph"))
try:
start = getattr(
importlib.import_module(
f"nettacker.lib.graph.{graph_name.rsplit('_graph')[0]}.engine"
),
"start",
)
except ModuleNotFoundError:
die_failure(_("graph_module_unavailable").format(graph_name))
log.info(_("finish_build_graph"))
return start(events)
def build_text_table(events):
"""
value['date'], value["target"], value['module_name'], value['scan_id'],
value['options'], value['event']
build a text table with generated event related to the scan
:param events: all events
:return:
array [text table, event_number]
"""
_table = texttable.Texttable()
table_headers = ["date", "target", "module_name", "port", "logs"]
_table.add_rows([table_headers])
for event in events:
log = merge_logs_to_list(json.loads(event["json_event"]), [])
_table.add_rows(
[
table_headers,
[
event["date"],
event["target"],
event["module_name"],
str(event["port"]),
"\n".join(log) if log else "Detected",
],
]
)
return (
_table.draw()
+ "\n\n"
+ _("nettacker_version_details").format(version_info()[0], version_info()[1], now())
+ "\n"
)
def create_report(options, scan_id):
"""
sort all events, create log file in HTML/TEXT/JSON and remove old logs
Args:
options: parsing options
scan_id: scan unique id
Returns:
True if success otherwise None
"""
all_scan_logs = get_logs_by_scan_id(scan_id)
if not all_scan_logs:
log.info(_("no_events_for_report"))
return True
report_path_filename = options.report_path_filename
if (len(report_path_filename) >= 5 and report_path_filename[-5:] == ".html") or (
len(report_path_filename) >= 4 and report_path_filename[-4:] == ".htm"
):
if options.graph_name:
html_graph = build_graph(options.graph_name, all_scan_logs)
else:
html_graph = ""
from nettacker.lib.html_log import log_data
html_table_content = log_data.table_title.format(
html_graph,
log_data.css_1,
"date",
"target",
"module_name",
"port",
"logs",
"json_event",
)
index = 1
for event in all_scan_logs:
log_list = merge_logs_to_list(json.loads(event["json_event"]), [])
html_table_content += log_data.table_items.format(
event["date"],
event["target"],
event["module_name"],
event["port"],
"<br>".join(log_list) if log_list else "Detected", # event["event"], #log
index,
html.escape(event["json_event"]),
)
index += 1
html_table_content += (
log_data.table_end
+ '<div id="json_length">'
+ str(index - 1)
+ "</div>"
+ '<p class="footer">'
+ _("nettacker_version_details").format(version_info()[0], version_info()[1], now())
+ "</p>"
+ log_data.json_parse_js
)
with open(report_path_filename, "w", encoding="utf-8") as report_file:
report_file.write(html_table_content + "\n")
elif len(report_path_filename) >= 5 and report_path_filename[-5:] == ".json":
with open(report_path_filename, "w", encoding="utf-8") as report_file:
report_file.write(str(json.dumps(all_scan_logs)) + "\n")
elif len(report_path_filename) >= 4 and report_path_filename[-4:] == ".csv":
keys = all_scan_logs[0].keys()
with open(report_path_filename, "a") as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=keys)
writer.writeheader()
for log_list in all_scan_logs:
dict_data = {key: value for key, value in log_list.items() if key in keys}
writer.writerow(dict_data)
else:
with open(report_path_filename, "w", encoding="utf-8") as report_file:
report_file.write(build_text_table(all_scan_logs))
log.write(build_text_table(all_scan_logs))
submit_report_to_db(
{
"date": datetime.now(),
"scan_id": scan_id,
"options": vars(options),
}
)
log.info(_("file_saved").format(report_path_filename))
return True

View File

@ -1,11 +1,8 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import netaddr
import requests
from netaddr import iprange_to_cidrs
from netaddr import IPNetwork
from netaddr import iprange_to_cidrs, IPNetwork
def generate_ip_range(ip_range):
@ -18,13 +15,13 @@ def generate_ip_range(ip_range):
Returns:
an array with CIDRs
"""
if '/' in ip_range:
return [
ip.format() for ip in [cidr for cidr in IPNetwork(ip_range)]
]
if "/" in ip_range:
return [ip.format() for ip in [cidr for cidr in IPNetwork(ip_range)]]
else:
ips = []
for generator_ip_range in [cidr.iter_hosts() for cidr in iprange_to_cidrs(*ip_range.rsplit('-'))]:
for generator_ip_range in [
cidr.iter_hosts() for cidr in iprange_to_cidrs(*ip_range.rsplit("-"))
]:
for ip in generator_ip_range:
ips.append(ip.format())
return ips
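# Usage sketch for the CIDR branch; dashed ranges ("a-b") take the
# iprange_to_cidrs()/iter_hosts() path instead.
assert generate_ip_range("192.168.1.0/30") == [
    "192.168.1.0", "192.168.1.1", "192.168.1.2", "192.168.1.3"
]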
@ -44,9 +41,9 @@ def get_ip_range(ip):
return generate_ip_range(
json.loads(
requests.get(
'https://rest.db.ripe.net/search.json?query-string={ip}&flags=no-filtering'.format(ip=ip)
f"https://rest.db.ripe.net/search.json?query-string={ip}&flags=no-filtering"
).content
)['objects']['object'][0]['primary-key']['attribute'][0]['value'].replace(' ', '')
)["objects"]["object"][0]["primary-key"]["attribute"][0]["value"].replace(" ", "")
)
except Exception:
return [ip]
@ -67,14 +64,24 @@ def is_single_ipv4(ip):
def is_ipv4_range(ip_range):
try:
return '/' in ip_range and '.' in ip_range and '-' not in ip_range and netaddr.IPNetwork(ip_range)
return (
"/" in ip_range
and "." in ip_range
and "-" not in ip_range
and netaddr.IPNetwork(ip_range)
)
except Exception:
return False
def is_ipv4_cidr(ip_range):
try:
return '/' not in ip_range and '.' in ip_range and '-' in ip_range and iprange_to_cidrs(*ip_range.split('-'))
return (
"/" not in ip_range
and "." in ip_range
and "-" in ip_range
and iprange_to_cidrs(*ip_range.split("-"))
)
except Exception:
return False
@ -94,13 +101,23 @@ def is_single_ipv6(ip):
def is_ipv6_range(ip_range):
try:
return '/' not in ip_range and ':' in ip_range and '-' in ip_range and iprange_to_cidrs(*ip_range.split('-'))
return (
"/" not in ip_range
and ":" in ip_range
and "-" in ip_range
and iprange_to_cidrs(*ip_range.split("-"))
)
except Exception:
return False
def is_ipv6_cidr(ip_range):
try:
return '/' in ip_range and ':' in ip_range and '-' not in ip_range and netaddr.IPNetwork(ip_range)
return (
"/" in ip_range
and ":" in ip_range
and "-" not in ip_range
and netaddr.IPNetwork(ip_range)
)
except Exception:
return False

View File

@ -0,0 +1,311 @@
import copy
import json
import re
import time
from abc import ABC
from datetime import datetime
import yaml
from nettacker.core.utils.common import merge_logs_to_list
from nettacker.config import Config
from nettacker.core.messages import messages as _
from nettacker.database.db import find_temp_events, submit_temp_logs_to_db, submit_logs_to_db
from nettacker.logger import get_logger, TerminalCodes
log = get_logger()
class BaseLibrary(ABC):
"""Nettacker library base class."""
client = None
def brute_force(self):
"""Brute force method."""
class BaseEngine(ABC):
"""Nettacker engine base class."""
library = None
def apply_extra_data(self, *args, **kwargs):
"""Add extra data into step context."""
def filter_large_content(self, content, filter_rate=150):
if len(content) <= filter_rate:
return content
else:
filter_rate -= 1
filter_index = filter_rate
for char in content[filter_rate:]:
if char == " ":
return content[0:filter_index] + _("filtered_content")
else:
filter_index += 1
return content
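# Standalone restatement of the truncation rule above: cut at the first
# space past the threshold and append a marker (the real method appends
# the translated "filtered_content" message).
def truncate_at_word(content, filter_rate=150, marker=" ...filtered..."):
    if len(content) <= filter_rate:
        return content
    for index in range(filter_rate - 1, len(content)):
        if content[index] == " ":
            return content[:index] + marker
    return content

assert truncate_at_word("word " * 40).endswith("...filtered...")
assert truncate_at_word("short text") == "short text"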
def get_dependent_results_from_database(self, target, module_name, scan_id, event_names):
events = []
for event_name in event_names.split(","):
while True:
event = find_temp_events(target, module_name, scan_id, event_name)
if event:
events.append(json.loads(event.event)["response"]["conditions_results"])
break
time.sleep(0.1)
return events
def find_and_replace_dependent_values(self, sub_step, dependent_on_temp_event):
if isinstance(sub_step, dict):
for key in copy.deepcopy(sub_step):
if not isinstance(sub_step[key], (str, float, int, bytes)):
sub_step[key] = self.find_and_replace_dependent_values(
copy.deepcopy(sub_step[key]), dependent_on_temp_event
)
else:
if isinstance(sub_step[key], str):
if "dependent_on_temp_event" in sub_step[key]:
globals().update(locals())
generate_new_step = copy.deepcopy(sub_step[key])
key_name = re.findall(
re.compile(
"dependent_on_temp_event\\[\\S+\\]\\['\\S+\\]\\[\\S+\\]"
),
generate_new_step,
)[0]
try:
key_value = eval(key_name)
except Exception:
key_value = "error"
sub_step[key] = sub_step[key].replace(key_name, key_value)
if isinstance(sub_step, list):
value_index = 0
for key in copy.deepcopy(sub_step):
if type(sub_step[value_index]) not in [str, float, int, bytes]:
sub_step[value_index] = self.find_and_replace_dependent_values(
copy.deepcopy(sub_step[value_index]), dependent_on_temp_event
)
else:
if isinstance(sub_step[value_index], str):
if "dependent_on_temp_event" in sub_step[value_index]:
globals().update(locals())
generate_new_step = copy.deepcopy(sub_step[value_index])
key_name = re.findall(
re.compile("dependent_on_temp_event\\['\\S+\\]\\[\\S+\\]"),
generate_new_step,
)[0]
try:
key_value = eval(key_name)
except Exception:
key_value = "error"
sub_step[value_index] = sub_step[value_index].replace(
key_name, key_value
)
value_index += 1
return sub_step
def process_conditions(
self,
event,
module_name,
target,
scan_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
):
if "save_to_temp_events_only" in event.get("response", ""):
submit_temp_logs_to_db(
{
"date": datetime.now(),
"target": target,
"module_name": module_name,
"scan_id": scan_id,
"event_name": event["response"]["save_to_temp_events_only"],
"port": event.get("ports", ""),
"event": event,
"data": response,
}
)
if event["response"]["conditions_results"] and "save_to_temp_events_only" not in event.get(
"response", ""
):
# remove sensitive information before submitting to db
options = copy.deepcopy(options)
for key in Config.api:
try:
del options[key]
except KeyError:
continue
del event["response"]["conditions"]
del event["response"]["condition_type"]
if "log" in event["response"]:
del event["response"]["log"]
event_request_keys = copy.deepcopy(event)
del event_request_keys["response"]
submit_logs_to_db(
{
"date": datetime.now(),
"target": target,
"module_name": module_name,
"scan_id": scan_id,
"port": event.get("ports")
or event.get("port")
or (
event.get("url").split(":")[2].split("/")[0]
if isinstance(event.get("url"), str)
and len(event.get("url").split(":")) >= 3
and event.get("url").split(":")[2].split("/")[0].isdigit()
else ""
),
"event": " ".join(yaml.dump(event_request_keys).split())
+ " conditions: "
+ " ".join(yaml.dump(event["response"]["conditions_results"]).split()),
"json_event": event,
}
)
log_list = merge_logs_to_list(event["response"]["conditions_results"])
if log_list:
log.success_event_info(
_("send_success_event_from_module").format(
process_number,
module_name,
target,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
" ",
self.filter_large_content(
"\n".join(
[
TerminalCodes.PURPLE.value + key + TerminalCodes.RESET.value
for key in log_list
]
),
filter_rate=100000,
),
)
)
else:
log.success_event_info(
_("send_success_event_from_module").format(
process_number,
module_name,
target,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
" ".join(
[
TerminalCodes.YELLOW.value + key + TerminalCodes.RESET.value
if ":" in key
else TerminalCodes.GREEN.value + key + TerminalCodes.RESET.value
for key in yaml.dump(event_request_keys).split()
]
),
self.filter_large_content(
"conditions: "
+ " ".join(
[
TerminalCodes.PURPLE.value + key + TerminalCodes.RESET.value
if ":" in key
else TerminalCodes.GREEN.value
+ key
+ TerminalCodes.RESET.value
for key in yaml.dump(
event["response"]["conditions_results"]
).split()
]
),
filter_rate=150,
),
)
)
log.verbose_info(json.dumps(event))
return True
else:
del event["response"]["conditions"]
log.verbose_info(
_("send_unsuccess_event_from_module").format(
process_number,
module_name,
target,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
)
)
log.verbose_info(json.dumps(event))
return "save_to_temp_events_only" in event["response"]
def replace_dependent_values(self, sub_step, dependent_on_temp_event):
return self.find_and_replace_dependent_values(sub_step, dependent_on_temp_event)
def run(
self,
sub_step,
module_name,
target,
scan_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
):
"""Engine entry point."""
backup_method = copy.deepcopy(sub_step["method"])
backup_response = copy.deepcopy(sub_step["response"])
del sub_step["method"]
del sub_step["response"]
for attr_name in ("ports", "usernames", "passwords"):
if attr_name in sub_step:
value = sub_step.pop(attr_name)
sub_step[attr_name.rstrip("s")] = int(value) if attr_name == "ports" else value
if "dependent_on_temp_event" in backup_response:
temp_event = self.get_dependent_results_from_database(
target, module_name, scan_id, backup_response["dependent_on_temp_event"]
)
sub_step = self.replace_dependent_values(sub_step, temp_event)
action = getattr(self.library(), backup_method)
for _i in range(options["retries"]):
try:
response = action(**sub_step)
break
except Exception:
response = []
sub_step["method"] = backup_method
sub_step["response"] = backup_response
sub_step["response"]["conditions_results"] = response
self.apply_extra_data(sub_step, response)
return self.process_conditions(
sub_step,
module_name,
target,
scan_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
)
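The protocol files that follow all specialize this pair the same way: a BaseLibrary subclass supplies a client and a brute_force method, and a one-line BaseEngine subclass points at it. As a hedged sketch of how a hypothetical new protocol would plug in (IMAP is not part of this commit; the names and timeout handling are illustrative):

import imaplib

from nettacker.core.lib.base import BaseEngine, BaseLibrary


class ImapLibrary(BaseLibrary):
    client = imaplib.IMAP4

    def brute_force(self, host, port, username, password, timeout):
        # timeout kwarg needs Python 3.9+; a successful login returns the
        # credential dict, a failure raises and run()'s retry loop records [].
        connection = self.client(host, port, timeout=timeout)
        connection.login(username, password)
        connection.logout()
        return {
            "host": host,
            "port": port,
            "username": username,
            "password": password,
        }


class ImapEngine(BaseEngine):
    library = ImapLibrary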

View File

@ -0,0 +1,24 @@
import ftplib
from nettacker.core.lib.base import BaseEngine, BaseLibrary
class FtpLibrary(BaseLibrary):
client = ftplib.FTP
def brute_force(self, host, port, username, password, timeout):
connection = self.client(timeout=timeout)
connection.connect(host, port)
connection.login(username, password)
connection.close()
return {
"host": host,
"port": port,
"username": username,
"password": password,
}
class FtpEngine(BaseEngine):
library = FtpLibrary
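A usage sketch with illustrative values: on a successful login, brute_force returns the credential dict, which BaseEngine.run stores as conditions_results; any exception (refused connection, bad credentials) propagates to the retry loop in the base class, which leaves the response empty.
# FtpLibrary().brute_force("127.0.0.1", 21, "user1", "pass1", timeout=3)
# -> {"host": "127.0.0.1", "port": 21, "username": "user1", "password": "pass1"}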

View File

@ -0,0 +1,11 @@
import ftplib
from nettacker.core.lib.ftp import FtpEngine, FtpLibrary
class FtpsLibrary(FtpLibrary):
client = ftplib.FTP_TLS
class FtpsEngine(FtpEngine):
library = FtpsLibrary

View File

@ -0,0 +1,208 @@
#!/usr/bin/env python
import asyncio
import copy
import random
import re
import time
import aiohttp
from nettacker.core.utils.common import replace_dependent_response, reverse_and_regex_condition
from nettacker.core.lib.base import BaseEngine
async def perform_request_action(action, request_options):
start_time = time.time()
async with action(**request_options) as response:
return {
"reason": response.reason,
"status_code": str(response.status),
"content": await response.content.read(),
"headers": dict(response.headers),
"responsetime": time.time() - start_time,
}
async def send_request(request_options, method):
async with aiohttp.ClientSession() as session:
action = getattr(session, method, None)
response = await asyncio.gather(
*[asyncio.ensure_future(perform_request_action(action, request_options))]
)
return response[0]
def response_conditions_matched(sub_step, response):
if not response:
return {}
condition_type = sub_step["response"]["condition_type"]
conditions = sub_step["response"]["conditions"]
condition_results = {}
for condition in conditions:
if condition in ["reason", "status_code", "content"]:
regex = re.findall(re.compile(conditions[condition]["regex"]), response[condition])
reverse = conditions[condition]["reverse"]
condition_results[condition] = reverse_and_regex_condition(regex, reverse)
if condition == "headers":
# convert headers to case insensitive dict
for key in response["headers"].copy():
response["headers"][key.lower()] = response["headers"][key]
condition_results["headers"] = {}
for header in conditions["headers"]:
reverse = conditions["headers"][header]["reverse"]
try:
regex = re.findall(
re.compile(conditions["headers"][header]["regex"]),
response["headers"][header.lower()]
if header.lower() in response["headers"]
else False,
)
condition_results["headers"][header] = reverse_and_regex_condition(
regex, reverse
)
except TypeError:
condition_results["headers"][header] = []
if condition == "responsetime":
if len(conditions[condition].split()) == 2 and conditions[condition].split()[0] in [
"==",
"!=",
">=",
"<=",
">",
"<",
]:
exec(
"condition_results['responsetime'] = response['responsetime'] if ("
+ "response['responsetime'] {0} float(conditions['responsetime'].split()[-1])".format(
conditions["responsetime"].split()[0]
)
+ ") else []"
)
else:
condition_results["responsetime"] = []
if condition_type.lower() == "or":
# if one of the values are matched, it will be a string or float object in the array
# we count False in the array and if it's not all []; then we know one of the conditions
# is matched.
if (
"headers" not in condition_results
and (
list(condition_results.values()).count([]) != len(list(condition_results.values()))
)
) or (
"headers" in condition_results
and (
len(list(condition_results.values()))
+ len(list(condition_results["headers"].values()))
- list(condition_results.values()).count([])
- list(condition_results["headers"].values()).count([])
- 1
!= 0
)
):
if sub_step["response"].get("log", False):
condition_results["log"] = sub_step["response"]["log"]
if "response_dependent" in condition_results["log"]:
condition_results["log"] = replace_dependent_response(
condition_results["log"], condition_results
)
return condition_results
else:
return {}
if condition_type.lower() == "and":
if [] in condition_results.values() or (
"headers" in condition_results and [] in condition_results["headers"].values()
):
return {}
else:
if sub_step["response"].get("log", False):
condition_results["log"] = sub_step["response"]["log"]
if "response_dependent" in condition_results["log"]:
condition_results["log"] = replace_dependent_response(
condition_results["log"], condition_results
)
return condition_results
return {}
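To make the and/or evaluation concrete, a hedged example of the shapes this function consumes (field names as used above; the regexes and values are illustrative, not from a shipped module):

sub_step = {
    "response": {
        "condition_type": "or",
        "conditions": {
            "status_code": {"regex": "200|301", "reverse": False},
            "content": {"regex": "admin", "reverse": False},
        },
    }
}
response = {
    "reason": "OK",
    "status_code": "200",
    "content": "<html>admin panel</html>",
    "headers": {},
    "responsetime": 0.12,
}
# "or": truthy because not every condition result is []; "and" would require
# every condition (and every header condition) to produce a non-empty match.
# response_conditions_matched(sub_step, response)
# -> {"status_code": ["200"], "content": ["admin"]}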
class HttpEngine(BaseEngine):
def run(
self,
sub_step,
module_name,
target,
scan_id,
options,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
):
backup_method = copy.deepcopy(sub_step["method"])
backup_response = copy.deepcopy(sub_step["response"])
backup_iterative_response_match = copy.deepcopy(
sub_step["response"]["conditions"].get("iterative_response_match", None)
)
if options["user_agent"] == "random_user_agent":
sub_step["headers"]["User-Agent"] = random.choice(options["user_agents"])
del sub_step["method"]
if "dependent_on_temp_event" in backup_response:
temp_event = self.get_dependent_results_from_database(
target,
module_name,
scan_id,
backup_response["dependent_on_temp_event"],
)
sub_step = self.replace_dependent_values(sub_step, temp_event)
backup_response = copy.deepcopy(sub_step["response"])
del sub_step["response"]
for _i in range(options["retries"]):
try:
response = asyncio.run(send_request(sub_step, backup_method))
response["content"] = response["content"].decode(errors="ignore")
break
except Exception:
response = []
sub_step["method"] = backup_method
sub_step["response"] = backup_response
if backup_iterative_response_match is not None:
backup_iterative_response_match = copy.deepcopy(
sub_step["response"]["conditions"].get("iterative_response_match")
)
del sub_step["response"]["conditions"]["iterative_response_match"]
sub_step["response"]["conditions_results"] = response_conditions_matched(
sub_step, response
)
if backup_iterative_response_match is not None and (
sub_step["response"]["conditions_results"]
or sub_step["response"]["condition_type"] == "or"
):
sub_step["response"]["conditions"][
"iterative_response_match"
] = backup_iterative_response_match
for key in sub_step["response"]["conditions"]["iterative_response_match"]:
result = response_conditions_matched(
sub_step["response"]["conditions"]["iterative_response_match"][key],
response,
)
if result:
sub_step["response"]["conditions_results"][key] = result
return self.process_conditions(
sub_step,
module_name,
target,
scan_id,
options,
response,
process_number,
module_thread_number,
total_module_thread_number,
request_number_counter,
total_number_of_requests,
)

View File

@ -0,0 +1,24 @@
import poplib
from nettacker.core.lib.base import BaseEngine, BaseLibrary
class Pop3Library(BaseLibrary):
client = poplib.POP3
def brute_force(self, host, port, username, password, timeout):
connection = self.client(host, port=port, timeout=timeout)
connection.user(username)
connection.pass_(password)
connection.quit()
return {
"host": host,
"port": port,
"username": username,
"password": password,
}
class Pop3Engine(BaseEngine):
library = Pop3Library

View File

@ -0,0 +1,11 @@
import poplib
from nettacker.core.lib.pop3 import Pop3Engine, Pop3Library
class Pop3sLibrary(Pop3Library):
client = poplib.POP3_SSL
class Pop3sEngine(Pop3Engine):
library = Pop3sLibrary

View File

@ -0,0 +1,23 @@
import smtplib
from nettacker.core.lib.base import BaseEngine, BaseLibrary
class SmtpLibrary(BaseLibrary):
client = smtplib.SMTP
def brute_force(self, host, port, username, password, timeout):
connection = self.client(host, port, timeout=timeout)
connection.login(username, password)
connection.close()
return {
"host": host,
"port": port,
"username": username,
"password": password,
}
class SmtpEngine(BaseEngine):
library = SmtpLibrary

View File

@ -0,0 +1,23 @@
import smtplib
from nettacker.core.lib.base import BaseEngine, BaseLibrary
class SmtpsLibrary(BaseLibrary):
client = smtplib.SMTP
def brute_force(self, host, port, username, password, timeout):
connection = self.client(host, port, timeout=timeout)
connection.starttls()
connection.login(username, password)
connection.close()
return {
"host": host,
"port": port,
"username": username,
"password": password,
}
class SmtpsEngine(BaseEngine):
library = SmtpsLibrary

View File

@ -0,0 +1,265 @@
#!/usr/bin/env python
import copy
import logging
import os
import re
import select
import socket
import ssl
import struct
import time
from nettacker.core.utils.common import reverse_and_regex_condition
from nettacker.core.lib.base import BaseEngine, BaseLibrary
log = logging.getLogger(__name__)
def create_tcp_socket(host, port, timeout):
try:
socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
socket_connection.settimeout(timeout)
socket_connection.connect((host, port))
ssl_flag = False
except ConnectionRefusedError:
return None
try:
socket_connection = ssl.wrap_socket(socket_connection)
ssl_flag = True
except Exception:
socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
socket_connection.settimeout(timeout)
socket_connection.connect((host, port))
# finally:
# socket_connection.shutdown()
return socket_connection, ssl_flag
class SocketLibrary(BaseLibrary):
def tcp_connect_only(self, host, port, timeout):
tcp_socket = create_tcp_socket(host, port, timeout)
if tcp_socket is None:
return None
socket_connection, ssl_flag = tcp_socket
peer_name = socket_connection.getpeername()
socket_connection.close()
return {
"peer_name": peer_name,
"service": socket.getservbyport(int(port)),
"ssl_flag": ssl_flag,
}
def tcp_connect_send_and_receive(self, host, port, timeout):
tcp_socket = create_tcp_socket(host, port, timeout)
if tcp_socket is None:
return None
socket_connection, ssl_flag = tcp_socket
peer_name = socket_connection.getpeername()
try:
socket_connection.send(b"ABC\x00\r\n\r\n\r\n" * 10)
response = socket_connection.recv(1024 * 1024 * 10)
socket_connection.close()
# except ConnectionRefusedError:
# return None
except Exception:
try:
socket_connection.close()
response = b""
except Exception:
response = b""
return {
"peer_name": peer_name,
"service": socket.getservbyport(port),
"response": response.decode(errors="ignore"),
"ssl_flag": ssl_flag,
}
def socket_icmp(self, host, timeout):
"""
A pure python ping implementation using raw socket.
Note that ICMP messages can only be sent from processes running as root.
Derived from ping.c distributed in Linux's netkit. That code is
copyright (c) 1989 by The Regents of the University of California.
That code is in turn derived from code written by Mike Muuss of the
US Army Ballistic Research Laboratory in December, 1983 and
placed in the public domain. They have my thanks.
Bugs are naturally mine. I'd be glad to hear about them. There are
certainly word-size dependencies here.
Copyright (c) Matthew Dixon Cowles, <http://www.visi.com/~mdc/>.
Distributable under the terms of the GNU General Public License
version 2. Provided with no warranties of any sort.
Original Version from Matthew Dixon Cowles:
-> ftp://ftp.visi.com/users/mdc/ping.py
Rewrite by Jens Diemer:
-> http://www.python-forum.de/post-69122.html#69122
Rewrite by George Notaras:
-> http://www.g-loaded.eu/2009/10/30/python-ping/
Fork by Pierre Bourdon:
-> http://bitbucket.org/delroth/python-ping/
Revision history
~~~~~~~~~~~~~~~~
November 22, 1997
-----------------
Initial hack. Doesn't do much, but rather than try to guess
what features I (or others) will want in the future, I've only
put in what I need now.
December 16, 1997
-----------------
For some reason, the checksum bytes are in the wrong order when
this is run under Solaris 2.X for SPARC but it works right under
Linux x86. Since I don't know just what's wrong, I'll swap the
bytes always and then do an htons().
December 4, 2000
----------------
Changed the struct.pack() calls to pack the checksum and ID as
unsigned. My thanks to Jerome Poincheval for the fix.
May 30, 2007
------------
little rewrite by Jens Diemer:
- change socket asterisk import to a normal import
- replace time.time() with time.clock()
- delete "return None" (or change to "return" only)
- in checksum() rename "str" to "source_string"
November 8, 2009
----------------
Improved compatibility with GNU/Linux systems.
Fixes by:
* George Notaras -- http://www.g-loaded.eu
Reported by:
* Chris Hallman -- http://cdhallman.blogspot.com
Changes in this release:
- Re-use time.time() instead of time.clock(). The 2007 implementation
worked only under Microsoft Windows. Failed on GNU/Linux.
time.clock() behaves differently under the two OSes[1].
[1] http://docs.python.org/library/time.html#time.clock
September 25, 2010
------------------
Little modifications by Georgi Kolev:
- Added quiet_ping function.
- returns percent of lost packets, max round trip time, average round trip
time
- Added packet size to verbose_ping & quiet_ping functions.
- Bump up version to 0.2
------------------
5 Aug 2021 - Modified by Ali Razmjoo Qalaei (reformatted the code to be more human readable)
"""
icmp_socket = socket.getprotobyname("icmp")
socket_connection = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp_socket)
random_integer = os.getpid() & 0xFFFF
icmp_echo_request = 8
# Make a dummy header with a 0 checksum.
dummy_checksum = 0
header = struct.pack("bbHHh", icmp_echo_request, 0, dummy_checksum, random_integer, 1)
data = (
struct.pack("d", time.time())
+ struct.pack("d", time.time())
+ str((76 - struct.calcsize("d")) * "Q").encode()
) # packet size = 76 (removed 8 bytes size of header)
source_string = header + data
# Calculate the checksum on the data and the dummy header.
calculate_data = 0
max_size = (len(source_string) // 2) * 2  # integer floor to an even length
counter = 0
while counter < max_size:
calculate_data += source_string[counter + 1] * 256 + source_string[counter]
calculate_data = calculate_data & 0xFFFFFFFF # Necessary?
counter += 2
if max_size < len(source_string):
calculate_data += source_string[len(source_string) - 1]
calculate_data = calculate_data & 0xFFFFFFFF # Necessary?
calculate_data = (calculate_data >> 16) + (calculate_data & 0xFFFF)
calculate_data = calculate_data + (calculate_data >> 16)
calculated_data = ~calculate_data & 0xFFFF
# Swap bytes. Bugger me if I know why.
dummy_checksum = calculated_data >> 8 | (calculated_data << 8 & 0xFF00)
header = struct.pack(
"bbHHh",
icmp_echo_request,
0,
socket.htons(dummy_checksum),
random_integer,
1,
)
socket_connection.sendto(
header + data, (socket.gethostbyname(host), 1)
) # Don't know about the 1
delay = None  # stays None if the select below times out before a reply
while True:
started_select = time.time()
what_ready = select.select([socket_connection], [], [], timeout)
how_long_in_select = time.time() - started_select
if not what_ready[0]: # Timeout
break
time_received = time.time()
received_packet, address = socket_connection.recvfrom(1024)
icmp_header = received_packet[20:28]
(
packet_type,
packet_code,
packet_checksum,
packet_id,
packet_sequence,
) = struct.unpack("bbHHh", icmp_header)
if packet_id == random_integer:
packet_bytes = struct.calcsize("d")
time_sent = struct.unpack("d", received_packet[28 : 28 + packet_bytes])[0]
delay = time_received - time_sent
break
timeout = timeout - how_long_in_select
if timeout <= 0:
break
socket_connection.close()
return {"host": host, "response_time": delay, "ssl_flag": False}
class SocketEngine(BaseEngine):
library = SocketLibrary
def response_conditions_matched(self, sub_step, response):
conditions = sub_step["response"]["conditions"]
condition_type = sub_step["response"]["condition_type"]
condition_results = {}
if sub_step["method"] == "tcp_connect_only":
return response
if sub_step["method"] == "tcp_connect_send_and_receive":
if response:
received_content = response["response"]
for condition in conditions:
regex = re.findall(
re.compile(conditions[condition]["regex"]), received_content
)
reverse = conditions[condition]["reverse"]
condition_results[condition] = reverse_and_regex_condition(regex, reverse)
for condition in copy.deepcopy(condition_results):
if not condition_results[condition]:
del condition_results[condition]
if "open_port" in condition_results and len(condition_results) > 1:
del condition_results["open_port"]
del conditions["open_port"]
if condition_type == "and":
return condition_results if len(condition_results) == len(conditions) else []
if condition_type == "or":
return condition_results if condition_results else []
return []
if sub_step["method"] == "socket_icmp":
return response
return []
def apply_extra_data(self, sub_step, response):
sub_step["response"]["ssl_flag"] = (
response["ssl_flag"] if isinstance(response, dict) else False
)
sub_step["response"]["conditions_results"] = self.response_conditions_matched(
sub_step, response
)

View File

@ -0,0 +1,39 @@
import logging
from paramiko import SSHClient, AutoAddPolicy
from paramiko.auth_strategy import NoneAuth, Password
from nettacker.core.lib.base import BaseEngine, BaseLibrary
logging.getLogger("paramiko.transport").disabled = True
class SshLibrary(BaseLibrary):
def brute_force(self, *args, **kwargs):
host = kwargs["host"]
port = kwargs["port"]
username = kwargs["username"]
password = kwargs["password"]
connection = SSHClient()
connection.set_missing_host_key_policy(AutoAddPolicy())
connection.connect(
**{
"hostname": host,
"port": port,
"auth_strategy": Password(username=username, password_getter=lambda: password)
if password
else NoneAuth(username=username),
}
)
return {
"host": host,
"port": port,
"username": username,
"password": password,
}
class SshEngine(BaseEngine):
library = SshLibrary

View File

@ -0,0 +1,26 @@
import telnetlib
from nettacker.core.lib.base import BaseEngine, BaseLibrary
class TelnetLibrary(BaseLibrary):
client = telnetlib.Telnet
def brute_force(self, host, port, username, password, timeout):
connection = self.client(host, port, timeout)
connection.read_until(b"login: ")
connection.write(username.encode() + b"\n")
connection.read_until(b"Password: ")
connection.write(password.encode() + b"\n")
connection.close()
return {
"host": host,
"port": port,
"username": username,
"password": password,
}
class TelnetEngine(BaseEngine):
library = TelnetLibrary

View File

@ -0,0 +1,71 @@
import sys
import yaml
from nettacker.config import Config
from nettacker.core.utils.common import find_args_value
def application_language():
if "-L" in sys.argv:
language = find_args_value("-L") or "en"
elif "--language" in sys.argv:
language = find_args_value("--language") or "en"
else:
language = Config.settings.language
if language not in get_languages():
language = "en"
return language
def load_yaml(filename):
with open(filename) as yaml_file:
return yaml.load(yaml_file, Loader=yaml.FullLoader)
def get_languages():
"""
Get available languages
Returns:
an array of languages
"""
languages_list = []
for language in Config.path.locale_dir.glob("*.yaml"):
languages_list.append(language.stem)
return list(set(languages_list))
class load_message:
def __init__(self):
self.language = application_language()
self.messages = load_yaml(
"{messages_path}/{language}.yaml".format(
messages_path=Config.path.locale_dir, language=self.language
)
)
if self.language != "en":
self.messages_en = load_yaml(
"{messages_path}/en.yaml".format(messages_path=Config.path.locale_dir)
)
for message_id in self.messages_en:
if message_id not in self.messages:
self.messages[message_id] = self.messages_en[message_id]
message_cache = load_message().messages
def messages(msg_id):
"""
load a message from the messages library in the configured language
Args:
msg_id: message id
Returns:
the message content in the selected language if the message is
found, otherwise the message in English
"""
return message_cache[str(msg_id)]
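Callers import this as shown in base.py above (from nettacker.core.messages import messages as _), so a lookup is just a call with a message id; since message_cache is a plain dict, an unknown id raises KeyError rather than falling back.
# from nettacker.core.messages import messages as _
# _("filtered_content")  # message id used in base.py above; returns the
#                        # localized string, pre-merged with English defaults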

View File

@ -0,0 +1,181 @@
import copy
import importlib
import json
import os
import time
from threading import Thread
from nettacker import logger
from nettacker.config import Config
from nettacker.core.messages import messages as _
from nettacker.core.template import TemplateLoader
from nettacker.core.utils.common import expand_module_steps, wait_for_threads_to_finish
from nettacker.database.db import find_events
log = logger.get_logger()
class Module:
def __init__(
self,
module_name,
options,
target,
scan_id,
process_number,
thread_number,
total_number_threads,
):
self.module_name = module_name
self.process_number = process_number
self.module_thread_number = thread_number
self.total_module_thread_number = total_number_threads
self.module_inputs = options.__dict__
self.module_inputs["target"] = target
if options.modules_extra_args:
for module_extra_args in self.module_inputs["modules_extra_args"]:
self.module_inputs[module_extra_args] = self.module_inputs["modules_extra_args"][
module_extra_args
]
self.target = target
self.scan_id = scan_id
self.skip_service_discovery = options.skip_service_discovery
self.discovered_services = None
self.ignored_core_modules = ["subdomain_scan", "icmp_scan", "port_scan"]
contents = TemplateLoader("port_scan", {"target": ""}).load()
self.service_discovery_signatures = list(
set(contents["payloads"][0]["steps"][0]["response"]["conditions"].keys())
)
self.libraries = [
module_protocol.split(".py")[0]
for module_protocol in os.listdir(Config.path.module_protocols_dir)
if module_protocol.endswith(".py")
and module_protocol not in {"__init__.py", "base.py"}
]
def load(self):
self.module_content = TemplateLoader(self.module_name, self.module_inputs).load()
if not self.skip_service_discovery and self.module_name not in self.ignored_core_modules:
services = {}
for service in find_events(self.target, "port_scan", self.scan_id):
service_event = json.loads(service.json_event)
port = service_event["port"]
protocols = service_event["response"]["conditions_results"].keys()
for protocol in protocols:
if protocol and protocol in self.libraries:
if protocol in services:
services[protocol].append(port)
else:
services[protocol] = [port]
self.discovered_services = copy.deepcopy(services)
index_payload = 0
for payload in copy.deepcopy(self.module_content["payloads"]):
if (
payload["library"] not in self.discovered_services
and payload["library"] in self.service_discovery_signatures
):
del self.module_content["payloads"][index_payload]
index_payload -= 1
else:
index_step = 0
for step in copy.deepcopy(
self.module_content["payloads"][index_payload]["steps"]
):
step = TemplateLoader.parse(
step, {"port": self.discovered_services[payload["library"]]}
)
self.module_content["payloads"][index_payload]["steps"][index_step] = step
index_step += 1
index_payload += 1
def generate_loops(self):
self.module_content["payloads"] = expand_module_steps(self.module_content["payloads"])
def sort_loops(self):
steps = []
for index in range(len(self.module_content["payloads"])):
for step in copy.deepcopy(self.module_content["payloads"][index]["steps"]):
if "dependent_on_temp_event" not in step[0]["response"]:
steps.append(step)
for step in copy.deepcopy(self.module_content["payloads"][index]["steps"]):
if (
"dependent_on_temp_event" in step[0]["response"]
and "save_to_temp_events_only" in step[0]["response"]
):
steps.append(step)
for step in copy.deepcopy(self.module_content["payloads"][index]["steps"]):
if (
"dependent_on_temp_event" in step[0]["response"]
and "save_to_temp_events_only" not in step[0]["response"]
):
steps.append(step)
self.module_content["payloads"][index]["steps"] = steps
def start(self):
active_threads = []
# counting total number of requests
total_number_of_requests = 0
for payload in self.module_content["payloads"]:
if payload["library"] not in self.libraries:
log.warn(_("library_not_supported").format(payload["library"]))
return None
for step in payload["steps"]:
total_number_of_requests += len(step)
request_number_counter = 0
for payload in self.module_content["payloads"]:
library = payload["library"]
engine = getattr(
importlib.import_module(f"nettacker.core.lib.{library.lower()}"),
f"{library.capitalize()}Engine",
)()
for step in payload["steps"]:
for sub_step in step:
thread = Thread(
target=engine.run,
args=(
sub_step,
self.module_name,
self.target,
self.scan_id,
self.module_inputs,
self.process_number,
self.module_thread_number,
self.total_module_thread_number,
request_number_counter,
total_number_of_requests,
),
)
thread.name = f"{self.target} -> {self.module_name} -> {sub_step}"
request_number_counter += 1
log.verbose_event_info(
_("sending_module_request").format(
self.process_number,
self.module_name,
self.target,
self.module_thread_number,
self.total_module_thread_number,
request_number_counter,
total_number_of_requests,
)
)
thread.start()
time.sleep(self.module_inputs["time_sleep_between_requests"])
active_threads.append(thread)
wait_for_threads_to_finish(
active_threads,
maximum=self.module_inputs["thread_per_host"],
terminable=True,
)
wait_for_threads_to_finish(active_threads, maximum=None, terminable=True)
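The engine lookup in start() relies on a naming convention between the payload's library value, the module file, and the class name; a sketch of the resolution (library value taken from the protocol files above):

import importlib

library = "ftps"  # from payload["library"]
engine = getattr(
    importlib.import_module(f"nettacker.core.lib.{library.lower()}"),
    f"{library.capitalize()}Engine",  # "ftps" -> "FtpsEngine"
)()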

View File

@ -11,29 +11,30 @@ def getaddrinfo(*args):
Returns:
getaddrinfo
"""
return [(socket.AF_INET, socket.SOCK_STREAM, 6, '', (args[0], args[1]))]
return [(socket.AF_INET, socket.SOCK_STREAM, 6, "", (args[0], args[1]))]
def set_socks_proxy(socks_proxy):
if socks_proxy:
import socks
socks_version = socks.SOCKS5 if socks_proxy.startswith('socks5://') else socks.SOCKS4
socks_proxy = socks_proxy.split('://')[1] if '://' in socks_proxy else socks_proxy
if '@' in socks_proxy:
socks_username = socks_proxy.split(':')[0]
socks_password = socks_proxy.split(':')[1].split('@')[0]
socks_version = socks.SOCKS5 if socks_proxy.startswith("socks5://") else socks.SOCKS4
socks_proxy = socks_proxy.split("://")[1] if "://" in socks_proxy else socks_proxy
if "@" in socks_proxy:
socks_username = socks_proxy.split(":")[0]
socks_password = socks_proxy.split(":")[1].split("@")[0]
socks.set_default_proxy(
socks_version,
str(socks_proxy.rsplit('@')[1].rsplit(':')[0]), # hostname
int(socks_proxy.rsplit(':')[-1]), # port
str(socks_proxy.rsplit("@")[1].rsplit(":")[0]), # hostname
int(socks_proxy.rsplit(":")[-1]), # port
username=socks_username,
password=socks_password
password=socks_password,
)
else:
socks.set_default_proxy(
socks_version,
str(socks_proxy.rsplit(':')[0]), # hostname
int(socks_proxy.rsplit(':')[1]) # port
str(socks_proxy.rsplit(":")[0]), # hostname
int(socks_proxy.rsplit(":")[1]), # port
)
return socks.socksocket, getaddrinfo
else:

View File

@ -0,0 +1,42 @@
import copy
import yaml
from nettacker.config import Config
class TemplateLoader:
def __init__(self, name, inputs=None) -> None:
self.name = name
self.inputs = inputs or {}
@staticmethod
def parse(module_content, module_inputs):
if isinstance(module_content, dict):
for key in copy.deepcopy(module_content):
if key in module_inputs:
if module_inputs[key]:
module_content[key] = module_inputs[key]
elif isinstance(module_content[key], (dict, list)):
module_content[key] = TemplateLoader.parse(module_content[key], module_inputs)
elif isinstance(module_content, list):
array_index = 0
for key in copy.deepcopy(module_content):
module_content[array_index] = TemplateLoader.parse(key, module_inputs)
array_index += 1
return module_content
def open(self):
module_name_parts = self.name.split("_")
action = module_name_parts[-1]
library = "_".join(module_name_parts[:-1])
with open(Config.path.modules_dir / action / f"{library}.yaml") as yaml_file:
return yaml_file.read()
def format(self):
return self.open().format(**self.inputs)
def load(self):
return self.parse(yaml.safe_load(self.format()), self.inputs)
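The naming convention in open() is worth spelling out: the last underscore-separated token of the module name selects the action directory, and the remainder names the YAML file. A hedged illustration (module names are examples, not a list of shipped modules):
# TemplateLoader("port_scan").load()
#   reads modules_dir / "scan" / "port.yaml"
# TemplateLoader("ftp_brute", {"target": "192.168.1.1"}).load()
#   reads modules_dir / "brute" / "ftp.yaml", format() fills {target},
#   then parse() substitutes any remaining matching input keys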

View File

@ -0,0 +1,326 @@
import copy
import ctypes
import datetime
import hashlib
import importlib
import math
import multiprocessing
import random
import re
import string
import sys
import time
from nettacker import logger
log = logger.get_logger()
def replace_dependent_response(log, response_dependent):
"""The `response_dependent` is needed for `eval` below."""
if str(log):
key_name = re.findall(re.compile("response_dependent\\['\\S+\\]"), log)
for i in key_name:
try:
key_value = eval(i)
except Exception:
key_value = "response dependent error"
log = log.replace(i, " ".join(key_value))
return log
def merge_logs_to_list(result, log_list=None):
if log_list is None:  # avoid the shared mutable default argument
log_list = []
if isinstance(result, dict):
for i in result:
if "log" == i:
log_list.append(result["log"])
else:
merge_logs_to_list(result[i], log_list)
return list(set(log_list))
def reverse_and_regex_condition(regex, reverse):
if regex:
if reverse:
return []
return list(set(regex))
else:
if reverse:
return True
return []
def wait_for_threads_to_finish(threads, maximum=None, terminable=False, sub_process=False):
while threads:
try:
for thread in threads:
if not thread.is_alive():
threads.remove(thread)
if maximum and len(threads) < maximum:
break
time.sleep(0.01)
except KeyboardInterrupt:
if terminable:
for thread in threads:
terminate_thread(thread)
if sub_process:
for thread in threads:
thread.kill()
return False
return True
def terminate_thread(thread, verbose=True):
"""
kill a thread https://stackoverflow.com/a/15274929
Args:
thread: an alive thread
verbose: verbose mode/boolean
Returns:
True/None
"""
if verbose:
log.info("killing {0}".format(thread.name))
if not thread.is_alive():
return
exc = ctypes.py_object(SystemExit)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident), exc)
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("PyThreadState_SetAsyncExc failed")
return True
def find_args_value(args_name):
try:
return sys.argv[sys.argv.index(args_name) + 1]
except Exception:
return None
def re_address_repeaters_key_name(key_name):
return "".join(["['" + _key + "']" for _key in key_name.split("/")[:-1]])
def generate_new_sub_steps(sub_steps, data_matrix, arrays):
original_sub_steps = copy.deepcopy(sub_steps)
steps_array = []
for array in data_matrix:
array_name_position = 0
for array_name in arrays:
for sub_step in sub_steps:
exec(
"original_sub_steps{key_name} = {matrix_value}".format(
key_name=re_address_repeaters_key_name(array_name),
matrix_value='"' + str(array[array_name_position]) + '"'
if isinstance(array[array_name_position], int)
or isinstance(array[array_name_position], str)
else array[array_name_position],
)
)
array_name_position += 1
steps_array.append(copy.deepcopy(original_sub_steps))
return steps_array
def find_repeaters(sub_content, root, arrays):
if isinstance(sub_content, dict) and "nettacker_fuzzer" not in sub_content:
temporary_content = copy.deepcopy(sub_content)
original_root = root
for key in sub_content:
root = original_root
root += key + "/"
temporary_content[key], _root, arrays = find_repeaters(sub_content[key], root, arrays)
sub_content = copy.deepcopy(temporary_content)
root = original_root
if (not isinstance(sub_content, (bool, int, float))) and (
isinstance(sub_content, list) or "nettacker_fuzzer" in sub_content
):
arrays[root] = sub_content
return (sub_content, root, arrays) if root != "" else arrays
class value_to_class:
def __init__(self, value):
self.value = value
def class_to_value(arrays):
original_arrays = copy.deepcopy(arrays)
array_index = 0
for array in arrays:
value_index = 0
for value in array:
if isinstance(value, value_to_class):
original_arrays[array_index][value_index] = value.value
value_index += 1
array_index += 1
return original_arrays
def generate_and_replace_md5(content):
# TODO: make it better and document it
md5_content = content.split("NETTACKER_MD5_GENERATOR_START")[1].split(
"NETTACKER_MD5_GENERATOR_STOP"
)[0]
md5_content_backup = md5_content
if isinstance(md5_content, str):
md5_content = md5_content.encode()
md5_hash = hashlib.md5(md5_content).hexdigest()
return content.replace(
"NETTACKER_MD5_GENERATOR_START" + md5_content_backup + "NETTACKER_MD5_GENERATOR_STOP",
md5_hash,
)
def arrays_to_matrix(arrays):
import numpy
return (
numpy.array(numpy.meshgrid(*[arrays[array_name] for array_name in arrays]))
.T.reshape(-1, len(arrays.keys()))
.tolist()
)
def string_to_bytes(string):
return string.encode()
AVAILABLE_DATA_FUNCTIONS = {"passwords": {"read_from_file"}}
def apply_data_functions(data):
for item in data:
if item not in AVAILABLE_DATA_FUNCTIONS:
continue
for fn_name in data[item]:
if fn_name not in AVAILABLE_DATA_FUNCTIONS[item]:
continue
fn = getattr(importlib.import_module("nettacker.core.fuzzer"), fn_name)
data[item] = fn(*data[item][fn_name])
def fuzzer_repeater_perform(arrays):
original_arrays = copy.deepcopy(arrays)
for array_name in arrays:
if "nettacker_fuzzer" not in arrays[array_name]:
continue
data = arrays[array_name]["nettacker_fuzzer"]["data"]
apply_data_functions(data)
data_matrix = arrays_to_matrix(data)
prefix = arrays[array_name]["nettacker_fuzzer"]["prefix"]
input_format = arrays[array_name]["nettacker_fuzzer"]["input_format"]
interceptors = copy.deepcopy(arrays[array_name]["nettacker_fuzzer"]["interceptors"])
if interceptors:
interceptors = interceptors.split(",")
suffix = arrays[array_name]["nettacker_fuzzer"]["suffix"]
processed_array = []
for sub_data in data_matrix:
formatted_data = {}
index_input = 0
for value in sub_data:
formatted_data[list(data.keys())[index_input]] = value
index_input += 1
interceptors_function = ""
interceptors_function_processed = ""
if interceptors:
interceptors_function += "interceptors_function_processed = "
for interceptor in interceptors[::-1]:
interceptors_function += "{interceptor}(".format(interceptor=interceptor)
interceptors_function += "input_format.format(**formatted_data)" + str(
")" * interceptors_function.count("(")
)
expected_variables = {}
globals().update(locals())
exec(interceptors_function, globals(), expected_variables)
interceptors_function_processed = expected_variables[
"interceptors_function_processed"
]
else:
interceptors_function_processed = input_format.format(**formatted_data)
processed_sub_data = interceptors_function_processed
if prefix:
processed_sub_data = prefix + processed_sub_data
if suffix:
processed_sub_data = processed_sub_data + suffix
processed_array.append(copy.deepcopy(processed_sub_data))
original_arrays[array_name] = processed_array
return original_arrays
def expand_module_steps(content):
return [expand_protocol(x) for x in copy.deepcopy(content)]
def expand_protocol(protocol):
protocol["steps"] = [expand_step(x) for x in protocol["steps"]]
return protocol
def expand_step(step):
arrays = fuzzer_repeater_perform(find_repeaters(step, "", {}))
if arrays:
return generate_new_sub_steps(step, class_to_value(arrays_to_matrix(arrays)), arrays)
else:
# Minimum 1 step in array
return [step]
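Net effect of the repeater machinery: find_repeaters collects every list-valued leaf (keyed by its slash-joined path), arrays_to_matrix takes the cartesian product via numpy.meshgrid, and generate_new_sub_steps writes each combination back through the exec'd path expression. A hedged illustration of the fan-out:
# a step such as {"ports": [80, 443], "methods": ["get", "head"]} expands to
# four concrete sub-steps: (80, "get"), (80, "head"), (443, "get"), (443, "head")
# re_address_repeaters_key_name("response/conditions/") -> "['response']['conditions']"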
def generate_random_token(length=10):
return "".join(random.choice(string.ascii_lowercase) for _ in range(length))
def now(format="%Y-%m-%d %H:%M:%S"):
"""
get the current date and time
Args:
format: the date and time format, default is "%Y-%m-%d %H:%M:%S"
Returns:
the current date and time as a formatted string
"""
return datetime.datetime.now().strftime(format)
def select_maximum_cpu_core(mode):
cpu_count = multiprocessing.cpu_count()
if cpu_count - 1 == 0:
return 1
mode_core_mapping = {
"maximum": cpu_count - 1,
"high": cpu_count / 2,
"normal": cpu_count / 4,
"low": cpu_count / 8,
}
rounded = math.ceil if mode == "high" else math.floor
return int(max((rounded(mode_core_mapping.get(mode, 1)), 1)))
def sort_dictionary(dictionary):
etc_flag = "..." in dictionary
if etc_flag:
del dictionary["..."]
sorted_dictionary = {}
for key in sorted(dictionary):
sorted_dictionary[key] = dictionary[key]
if etc_flag:
sorted_dictionary["..."] = {}
return sorted_dictionary
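A worked example of select_maximum_cpu_core on an assumed 8-core machine:
# cpu_count = 8:
#   "maximum" -> 8 - 1 = 7
#   "high"    -> ceil(8 / 2) = 4   (the only mode rounded up)
#   "normal"  -> floor(8 / 4) = 2
#   "low"     -> floor(8 / 8) = 1
# unknown modes map to 1, and max(..., 1) keeps at least one core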

View File

@ -0,0 +1,13 @@
import datetime
def now(format="%Y-%m-%d %H:%M:%S"):
"""
get the current date and time
Args:
format: the date and time format, default is "%Y-%m-%d %H:%M:%S"
Returns:
the current date and time as a formatted string
"""
return datetime.datetime.now().strftime(format)

View File

@ -1,26 +1,17 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import time
from flask import jsonify
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from database.models import (HostsLog,
Report,
TempEvents)
from core.alert import warn
from core.alert import verbose_info
from core.alert import messages
from api.api_core import structure
from config import nettacker_database_config
DB = nettacker_database_config()["DB"]
USER = nettacker_database_config()["USERNAME"]
PASSWORD = nettacker_database_config()["PASSWORD"]
HOST = nettacker_database_config()["HOST"]
PORT = nettacker_database_config()["PORT"]
DATABASE = nettacker_database_config()["DATABASE"]
from nettacker import logger
from nettacker.api.helpers import structure
from nettacker.config import Config
from nettacker.core.messages import messages
from nettacker.database.models import HostsLog, Report, TempEvents
config = Config()
log = logger.get_logger()
def db_inputs(connection_type):
@ -34,10 +25,13 @@ def db_inputs(connection_type):
Returns:
corresponding command to connect to the db
"""
context = Config.db.as_dict()
return {
"postgres": 'postgres+psycopg2://{0}:{1}@{2}:{3}/{4}'.format(USER, PASSWORD, HOST, PORT, DATABASE),
"mysql": 'mysql://{0}:{1}@{2}:{3}/{4}'.format(USER, PASSWORD, HOST, PORT, DATABASE),
"sqlite": 'sqlite:///{0}'.format(DATABASE)
"postgres": "postgres+psycopg2://{username}:{password}@{host}:{port}/{name}".format(
**context
),
"mysql": "mysql://{username}:{password}@{host}:{port}/{name}".format(**context),
"sqlite": "sqlite:///{name}".format(**context),
}[connection_type]
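For illustration, with assumed Config.db values the mapping above yields connection strings such as:
# engine="sqlite", name="nettacker.db" -> "sqlite:///nettacker.db"
# engine="postgres", username="nettacker", password="s3cret",
#   host="localhost", port=5432, name="nettacker"
#   -> "postgres+psycopg2://nettacker:s3cret@localhost:5432/nettacker"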
@ -48,25 +42,20 @@ def create_connection():
Returns:
connection if success otherwise False
"""
try:
db_engine = create_engine(
db_inputs(DB),
connect_args={
'check_same_thread': False
},
pool_pre_ping=True
)
Session = sessionmaker(bind=db_engine)
session = Session()
return session
except Exception:
warn(messages("database_connect_fail"))
return False
db_engine = create_engine(
db_inputs(Config.db.engine),
connect_args={"check_same_thread": False},
pool_pre_ping=True,
)
Session = sessionmaker(bind=db_engine)
return Session()
def send_submit_query(session):
"""
a function to send submit based queries to db (such as insert and update or delete), it retries 100 times if
a function to send submit-based queries (such as insert, update,
or delete) to the db; it retries 100 times if the
connection returns an error.
Args:
@ -82,15 +71,16 @@ def send_submit_query(session):
return True
except Exception:
time.sleep(0.1)
except Exception as _:
warn(messages("database_connect_fail"))
except Exception:
log.warn(messages("database_connect_fail"))
return False
return False
def submit_report_to_db(event):
"""
this function created to submit the generated reports into db, the files are not stored in db, just the path!
this function was created to submit the generated reports into the db;
the files are not stored in the db, just the path!
Args:
event: event log
@ -98,15 +88,13 @@ def submit_report_to_db(event):
Returns:
return True if submitted otherwise False
"""
verbose_info(messages("inserting_report_db"))
log.verbose_info(messages("inserting_report_db"))
session = create_connection()
session.add(
Report(
date=event["date"],
scan_unique_id=event["scan_unique_id"],
report_path_filename=json.dumps(
event["options"]["report_path_filename"]
),
scan_id=event["scan_id"],
report_path_filename=event["options"]["report_path_filename"],
options=json.dumps(event["options"]),
)
)
@ -115,7 +103,8 @@ def submit_report_to_db(event):
def remove_old_logs(options):
"""
this function remove old events (and duplicated) from database based on target, module, scan_unique_id
this function removes old (and duplicated) events
from the database based on target, module, scan_id
Args:
options: identifiers
@ -127,7 +116,7 @@ def remove_old_logs(options):
session.query(HostsLog).filter(
HostsLog.target == options["target"],
HostsLog.module_name == options["module_name"],
HostsLog.scan_unique_id != options["scan_unique_id"]
HostsLog.scan_id != options["scan_id"],
).delete(synchronize_session=False)
return send_submit_query(session)
@ -149,15 +138,15 @@ def submit_logs_to_db(log):
target=log["target"],
date=log["date"],
module_name=log["module_name"],
scan_unique_id=log["scan_unique_id"],
scan_id=log["scan_id"],
port=json.dumps(log["port"]),
event=json.dumps(log["event"]),
json_event=json.dumps(log["json_event"])
json_event=json.dumps(log["json_event"]),
)
)
return send_submit_query(session)
else:
warn(messages("invalid_json_type_to_db").format(log))
log.warn(messages("invalid_json_type_to_db").format(log))
return False
@ -178,27 +167,27 @@ def submit_temp_logs_to_db(log):
target=log["target"],
date=log["date"],
module_name=log["module_name"],
scan_unique_id=log["scan_unique_id"],
scan_id=log["scan_id"],
event_name=log["event_name"],
port=json.dumps(log["port"]),
event=json.dumps(log["event"]),
data=json.dumps(log["data"])
data=json.dumps(log["data"]),
)
)
return send_submit_query(session)
else:
warn(messages("invalid_json_type_to_db").format(log))
log.warn(messages("invalid_json_type_to_db").format(log))
return False
def find_temp_events(target, module_name, scan_unique_id, event_name):
def find_temp_events(target, module_name, scan_id, event_name):
"""
select all events by scan_id, target, and module_name
Args:
target: target
module_name: module name
scan_unique_id: unique scan identifier
scan_id: unique scan identifier
event_name: event_name
Returns:
@ -208,44 +197,52 @@ def find_temp_events(target, module_name, scan_unique_id, event_name):
try:
for _ in range(1, 100):
try:
return session.query(TempEvents).filter(
TempEvents.target == target,
TempEvents.module_name == module_name,
TempEvents.scan_unique_id == scan_unique_id,
TempEvents.event_name == event_name
).first()
return (
session.query(TempEvents)
.filter(
TempEvents.target == target,
TempEvents.module_name == module_name,
TempEvents.scan_id == scan_id,
TempEvents.event_name == event_name,
)
.first()
)
except Exception:
time.sleep(0.1)
except Exception as _:
warn(messages("database_connect_fail"))
except Exception:
log.warn(messages("database_connect_fail"))
return False
return False
def find_events(target, module_name, scan_unique_id):
def find_events(target, module_name, scan_id):
"""
select all events by scan_id, target, and module_name
Args:
target: target
module_name: module name
scan_unique_id: unique scan identifier
scan_id: unique scan identifier
Returns:
an array with JSON events or an empty array
"""
session = create_connection()
return session.query(HostsLog).filter(
HostsLog.target == target,
HostsLog.module_name == module_name,
HostsLog.scan_unique_id == scan_unique_id
).all()
return (
session.query(HostsLog)
.filter(
HostsLog.target == target,
HostsLog.module_name == module_name,
HostsLog.scan_id == scan_id,
)
.all()
)
def select_reports(page):
"""
this function created to crawl into submitted results, it shows last 10 results submitted in the database.
this function was created to crawl through submitted results;
it shows the last 10 results submitted to the database.
you may change the page (default 1) to go to next/previous page.
Args:
@ -257,16 +254,16 @@ def select_reports(page):
selected = []
session = create_connection()
try:
search_data = session.query(Report).order_by(
Report.id.desc()
).offset((page * 10) - 10).limit(10)
search_data = (
session.query(Report).order_by(Report.id.desc()).offset((page * 10) - 10).limit(10)
)
for data in search_data:
tmp = {
"id": data.id,
"date": data.date,
"scan_unique_id": data.scan_unique_id,
"scan_id": data.scan_id,
"report_path_filename": data.report_path_filename,
"options": json.loads(data.options)
"options": json.loads(data.options),
}
selected.append(tmp)
except Exception:
@ -285,20 +282,15 @@ def get_scan_result(id):
result file content (TEXT, HTML, JSON) on success, otherwise an error in JSON format.
"""
session = create_connection()
try:
try:
filename = session.query(Report).filter_by(id=id).first().report_path_filename[1:-1]
# for some reason filename saved like "filename" with double quotes in the beginning and end
return filename, open(str(filename), 'rb').read()
except Exception:
return jsonify(structure(status="error", msg="cannot find the file!")), 404
except Exception:
return jsonify(structure(status="error", msg="database error!")), 500
filename = session.query(Report).filter_by(id=id).first().report_path_filename
return filename, open(str(filename), "rb").read()
def last_host_logs(page):
"""
this function created to select the last 10 events from the database. you can goto next page by changing page value.
this function was created to select the last 10 events from the database.
you can go to the next page by changing the page value.
Args:
page: page number
@ -312,43 +304,45 @@ def last_host_logs(page):
"target": host.target,
"info": {
"module_name": [
_.module_name for _ in session.query(HostsLog).filter(
HostsLog.target == host.target
).group_by(HostsLog.module_name).all()
_.module_name
for _ in session.query(HostsLog)
.filter(HostsLog.target == host.target)
.group_by(HostsLog.module_name)
.all()
],
"date": session.query(HostsLog).filter(
HostsLog.target == host.target
).order_by(
HostsLog.id.desc()
).first().date,
"date": session.query(HostsLog)
.filter(HostsLog.target == host.target)
.order_by(HostsLog.id.desc())
.first()
.date,
# "options": [ # unnecessary data?
# _.options for _ in session.query(HostsLog).filter(
# HostsLog.target == host.target
# ).all()
# ],
"events": [
_.event for _ in session.query(HostsLog).filter(
HostsLog.target == host.target
).all()
_.event
for _ in session.query(HostsLog).filter(HostsLog.target == host.target).all()
],
}
} for host in session.query(HostsLog).group_by(HostsLog.target).order_by(HostsLog.id.desc()).offset(
(
page * 10
) - 10
).limit(10)
},
}
for host in session.query(HostsLog)
.group_by(HostsLog.target)
.order_by(HostsLog.id.desc())
.offset((page * 10) - 10)
.limit(10)
]
if len(hosts) == 0:
return structure(status="finished", msg="No more search results")
return hosts
def get_logs_by_scan_unique_id(scan_unique_id):
def get_logs_by_scan_id(scan_id):
"""
select all events by scan id hash
Args:
scan_unique_id: scan id hash
scan_id: scan id hash
Returns:
an array with JSON events or an empty array
@ -356,7 +350,7 @@ def get_logs_by_scan_unique_id(scan_unique_id):
session = create_connection()
return [
{
"scan_unique_id": scan_unique_id,
"scan_id": scan_id,
"target": log.target,
"module_name": log.module_name,
"date": str(log.date),
@ -364,9 +358,7 @@ def get_logs_by_scan_unique_id(scan_unique_id):
"event": json.loads(log.event),
"json_event": log.json_event,
}
for log in session.query(HostsLog).filter(
HostsLog.scan_unique_id == scan_unique_id
).all()
for log in session.query(HostsLog).filter(HostsLog.scan_id == scan_id).all()
]
@ -386,7 +378,7 @@ def logs_to_report_json(target):
logs = session.query(HostsLog).filter(HostsLog.target == target)
for log in logs:
data = {
"scan_unique_id": log.scan_unique_id,
"scan_id": log.scan_id,
"target": log.target,
"port": json.loads(log.port),
"event": json.loads(log.event),
@ -408,55 +400,55 @@ def logs_to_report_html(target):
Returns:
HTML report
"""
from core.graph import build_graph
from lib.html_log import log_data
from nettacker.core.graph import build_graph
from nettacker.lib.html_log import log_data
session = create_connection()
logs = [
{
"date": log.date,
"target": log.target,
"module_name": log.module_name,
"scan_unique_id": log.scan_unique_id,
"scan_id": log.scan_id,
"port": log.port,
"event": log.event,
"json_event": log.json_event
} for log in session.query(HostsLog).filter(
HostsLog.target == target
).all()
"json_event": log.json_event,
}
for log in session.query(HostsLog).filter(HostsLog.target == target).all()
]
html_graph = build_graph(
"d3_tree_v2_graph",
logs
)
html_graph = build_graph("d3_tree_v2_graph", logs)
html_content = log_data.table_title.format(
html_graph,
log_data.css_1,
'date',
'target',
'module_name',
'scan_unique_id',
'port',
'event',
'json_event'
"date",
"target",
"module_name",
"scan_id",
"port",
"event",
"json_event",
)
for event in logs:
html_content += log_data.table_items.format(
event['date'],
event["date"],
event["target"],
event['module_name'],
event['scan_unique_id'],
event['port'],
event['event'],
event['json_event']
event["module_name"],
event["scan_id"],
event["port"],
event["event"],
event["json_event"],
)
html_content += log_data.table_end + '<p class="footer">' + messages("nettacker_report") + '</p>'
html_content += (
log_data.table_end + '<p class="footer">' + messages("nettacker_report") + "</p>"
)
return html_content
def search_logs(page, query):
"""
search in events (host, date, port, module, category, description, username, password, scan_unique_id, scan_cmd)
search in events (host, date, port, module, category, description,
username, password, scan_id, scan_cmd)
Args:
page: page number
@ -468,17 +460,33 @@ def search_logs(page, query):
session = create_connection()
selected = []
try:
for host in session.query(HostsLog).filter(
for host in (
session.query(HostsLog)
.filter(
(HostsLog.target.like("%" + str(query) + "%"))
| (HostsLog.date.like("%" + str(query) + "%"))
| (HostsLog.module_name.like("%" + str(query) + "%"))
| (HostsLog.port.like("%" + str(query) + "%"))
| (HostsLog.event.like("%" + str(query) + "%"))
| (HostsLog.scan_unique_id.like("%" + str(query) + "%"))
).group_by(HostsLog.target).order_by(HostsLog.id.desc()).offset((page * 10) - 10).limit(10):
for data in session.query(HostsLog).filter(HostsLog.target == str(host.target)).group_by(
HostsLog.module_name, HostsLog.port, HostsLog.scan_unique_id, HostsLog.event
).order_by(HostsLog.id.desc()).all():
| (HostsLog.scan_id.like("%" + str(query) + "%"))
)
.group_by(HostsLog.target)
.order_by(HostsLog.id.desc())
.offset((page * 10) - 10)
.limit(10)
):
for data in (
session.query(HostsLog)
.filter(HostsLog.target == str(host.target))
.group_by(
HostsLog.module_name,
HostsLog.port,
HostsLog.scan_id,
HostsLog.event,
)
.order_by(HostsLog.id.desc())
.all()
):
n = 0
capture = None
for selected_data in selected:
@ -493,8 +501,8 @@ def search_logs(page, query):
"port": [],
"date": [],
"event": [],
"json_event": []
}
"json_event": [],
},
}
selected.append(tmp)
n = 0
@ -508,17 +516,11 @@ def search_logs(page, query):
if data.date not in selected[capture]["info"]["date"]:
selected[capture]["info"]["date"].append(data.date)
if data.port not in selected[capture]["info"]["port"]:
selected[capture]["info"]["port"].append(
json.loads(data.port)
)
selected[capture]["info"]["port"].append(json.loads(data.port))
if data.event not in selected[capture]["info"]["event"]:
selected[capture]["info"]["event"].append(
json.loads(data.event)
)
selected[capture]["info"]["event"].append(json.loads(data.event))
if data.json_event not in selected[capture]["info"]["json_event"]:
selected[capture]["info"]["json_event"].append(
json.loads(data.json_event)
)
selected[capture]["info"]["json_event"].append(json.loads(data.json_event))
except Exception:
return structure(status="error", msg="database error!")
if len(selected) == 0:

View File

@ -1,24 +1,20 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from sqlalchemy import Column, Integer, Text, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import (Column,
Integer,
Text,
DateTime)
Base = declarative_base()
class Report(Base):
"""
This class defines the table schema of the reports table. Any changes to the reports table need to be done here.
This class defines the table schema of the reports table. Any changes
to the reports table need to be done here.
"""
__tablename__ = "reports"
id = Column(Integer, primary_key=True, autoincrement=True)
date = Column(DateTime)
scan_id = Column(Text)
report_path_filename = Column(Text)
options = Column(Text)
@ -26,25 +22,24 @@ class Report(Base):
"""
returns a printable representation of the object of the class Report
"""
return "<Report(id={0}, scan_unique_id={1}, date={2}, report_path_filename={3})>".format(
self.id,
self.scan_unique_id,
self.date,
self.report_path_filename
return "<Report(id={0}, scan_id={1}, date={2}, report_path_filename={3})>".format(
self.id, self.scan_id, self.date, self.report_path_filename
)
class TempEvents(Base):
"""
This class defines the table schema of the temp_events table. Any changes to
the temp_events table need to be done here.
"""
__tablename__ = "temp_events"
id = Column(Integer, primary_key=True, autoincrement=True)
date = Column(DateTime)
target = Column(Text)
module_name = Column(Text)
scan_id = Column(Text)
event_name = Column(Text)
port = Column(Text)
event = Column(Text)
@ -54,32 +49,34 @@ class TempEvents(Base):
"""
returns a printable representation of the object of the class TempEvents
"""
return """
<scan_events(id={0}, target={1}, date={2}, module_name={3}, scan_id={4},
port={5}, event={6}, data={7})>
""".format(
    self.id,
    self.target,
    self.date,
    self.module_name,
    self.scan_id,
    self.port,
    self.event,
    self.data,
)
class HostsLog(Base):
"""
This class defines the table schema of the hosts_log table.
Any changes to the hosts_log table need to be done here.
"""
__tablename__ = "scan_events"
id = Column(Integer, primary_key=True, autoincrement=True)
date = Column(DateTime)
target = Column(Text)
module_name = Column(Text)
scan_id = Column(Text)
port = Column(Text)
event = Column(Text)
json_event = Column(Text)
@ -88,16 +85,16 @@ class HostsLog(Base):
"""
returns a printable representation of the object of the class HostsLog
"""
return """
<scan_events(id={0}, target={1}, date={2}, module_name={3}, scan_id={4},
port={5}, event={6}, json_event={7})>
""".format(
    self.id,
    self.target,
    self.date,
    self.module_name,
    self.scan_id,
    self.port,
    self.event,
    self.json_event,
)
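For illustration, an editor's sketch (not part of the commit) of how these declarative models are typically exercised, assuming Base and HostsLog from above are in scope; it uses a throwaway in-memory SQLite database.
# --- Editor's illustrative sketch, not part of the commit.
from datetime import datetime

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)  # creates reports, temp_events, scan_events

session = sessionmaker(bind=engine)()
session.add(
    HostsLog(
        date=datetime.now(),
        target="127.0.0.1",
        module_name="port_scan",
        scan_id="a1b2c3",
        port="80",
        event="open",
        json_event="{}",
    )
)
session.commit()
print(session.query(HostsLog).first())  # rendered via the __repr__ above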
View File
@ -0,0 +1,36 @@
from sqlalchemy import create_engine
from nettacker.config import Config
from nettacker.database.models import Base
def mysql_create_database():
"""
when using mysql database, this is the function that is used to create the
database for the first time when you run the nettacker module.
"""
engine = create_engine(
"mysql://{username}:{password}@{host}:{port}".format(**Config.db.as_dict())
)
existing_databases = engine.execute("SHOW DATABASES;")
existing_databases = [d[0] for d in existing_databases]
if Config.db.name not in existing_databases:
engine.execute("CREATE DATABASE {0} ".format(Config.db.name))
def mysql_create_tables():
"""
when using mysql database, this is the function that is used to create the
tables in the database for the first time when you run the nettacker module.
Args:
None
Returns:
None
"""
db_engine = create_engine(
"mysql://{username}:{password}@{host}:{port}/{name}".format(**Config.db.as_dict())
)
Base.metadata.create_all(db_engine)
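A note for context: Engine.execute() with raw SQL strings is SQLAlchemy 1.x API and was removed in 2.0. The sketch below is an editor's rough 2.0-style equivalent of mysql_create_database(); the function name and parameters are hypothetical, not part of the commit.
# --- Editor's illustrative sketch, not part of the commit.
from sqlalchemy import create_engine, text

def mysql_create_database_v2(server_url, db_name):
    # AUTOCOMMIT so CREATE DATABASE runs outside a transaction.
    engine = create_engine(server_url, isolation_level="AUTOCOMMIT")
    with engine.connect() as conn:
        existing = [row[0] for row in conn.execute(text("SHOW DATABASES"))]
        if db_name not in existing:
            conn.execute(text(f"CREATE DATABASE {db_name}"))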
View File
@ -0,0 +1,34 @@
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError
from nettacker.config import Config
from nettacker.database.models import Base
def postgres_create_database():
"""
when using postgres database, this is the function that is used to
create the database for the first time when you run the nettacker module.
"""
try:
engine = create_engine(
"postgres+psycopg2://{username}:{password}@{host}:{port}/{name}".format(
**Config.db.as_dict()
)
)
Base.metadata.create_all(engine)
except OperationalError:
# if the database does not exist
engine = create_engine("postgres+psycopg2://postgres:postgres@localhost/postgres")
conn = engine.connect()
conn.execute("commit")
conn.execute(f"CREATE DATABASE {Config.db.name}")
conn.close()
engine = create_engine(
"postgres+psycopg2://{username}:{password}@{host}:{port}/{name}".format(
**Config.db.as_dict()
)
)
Base.metadata.create_all(engine)
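For context: PostgreSQL refuses CREATE DATABASE inside a transaction block, and SQLAlchemy opens one implicitly, which is why the fallback branch issues conn.execute("commit") first. An editor's sketch of the more explicit variant follows; note too that SQLAlchemy 1.4+ drops the "postgres://" dialect alias, so newer versions require "postgresql+psycopg2://". The database name below is hypothetical.
# --- Editor's illustrative sketch, not part of the commit.
from sqlalchemy import create_engine, text

engine = create_engine(
    "postgresql+psycopg2://postgres:postgres@localhost/postgres",
    isolation_level="AUTOCOMMIT",  # CREATE DATABASE must run outside a transaction
)
with engine.connect() as conn:
    conn.execute(text("CREATE DATABASE nettacker_example"))  # hypothetical name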
View File
@ -1,8 +1,8 @@
OWASP Nettacker Database Files
=======================

This folder contains the files that handle database transactions for OWASP Nettacker.

* `db.py` contains the database transaction functions
* `models.py` contains the database structure layout
* `mysql_create.py` contains functions to create the db structure described in `models.py` in a MySQL database
* `postgres_create.py` contains functions to create the db structure described in `models.py` in a PostgreSQL database
* `sqlite_create.py` contains functions to create the db structure described in `models.py` in a SQLite database
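Taken together, a first run against the default SQLite backend reduces to the sketch below (editor's illustration, assuming the `nettacker.database` package layout introduced by this commit).
# --- Editor's illustrative sketch, not part of the commit.
from nettacker.database.sqlite_create import sqlite_create_tables

sqlite_create_tables()  # builds the schema declared in models.py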
View File
@ -0,0 +1,17 @@
from sqlalchemy import create_engine
from nettacker.config import Config
from nettacker.database.models import Base
def sqlite_create_tables():
"""
when using sqlite database, this is the function that is used to create
the database schema for the first time when you run the nettacker module.
"""
db_engine = create_engine(
"sqlite:///{name}".format(**Config.db.as_dict()),
connect_args={"check_same_thread": False},
)
Base.metadata.create_all(db_engine)
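For context: check_same_thread=False maps onto the standard-library sqlite3 flag of the same name, letting the connection be used from threads other than the one that created it; serializing access remains the caller's job. An editor's sketch of the underlying behaviour, standard library only:
# --- Editor's illustrative sketch, not part of the commit.
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)

def probe():
    conn.execute("SELECT 1")  # legal across threads only because of the flag

t = threading.Thread(target=probe)
t.start()
t.join()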
View File
@ -0,0 +1,49 @@
import json
from nettacker.config import Config
from nettacker.core.messages import messages
def start(events):
"""
generate the d3_tree_v1_graph with events
Args:
events: all events
Returns:
a graph in HTML
"""
# define a normalised_json
normalisedjson = {"name": "Started attack", "children": {}}
# get data for normalised_json
for event in events:
if event["target"] not in normalisedjson["children"]:
normalisedjson["children"].update({event["target"]: {}})
normalisedjson["children"][event["target"]].update({event["module_name"]: []})
if event["module_name"] not in normalisedjson["children"][event["target"]]:
normalisedjson["children"][event["target"]].update({event["module_name"]: []})
normalisedjson["children"][event["target"]][event["module_name"]].append(
f"target: {event['target']}, module_name: {event['module_name']}, port: "
f"{event['port']}, event: {event['event']}"
)
# define a d3_structure_json
d3_structure = {"name": "Starting attack", "children": []}
# populate d3_structure from normalisedjson
for target in list(normalisedjson["children"].keys()):
for module_name in list(normalisedjson["children"][target].keys()):
for description in normalisedjson["children"][target][module_name]:
children_array = [{"name": module_name, "children": [{"name": description}]}]
d3_structure["children"].append({"name": target, "children": children_array})
data = (
open(Config.path.web_static_dir / "report/d3_tree_v1.html")
.read()
.replace("__data_will_locate_here__", json.dumps(d3_structure))
.replace("__title_to_replace__", messages("pentest_graphs"))
.replace("__description_to_replace__", messages("graph_message"))
.replace("__html_title_to_replace__", messages("nettacker_report"))
)
return data
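For illustration, an editor's sketch (hypothetical sample data, not from the commit) of the shapes the two passes above produce for a single event:
# --- Editor's illustrative sketch, not part of the commit.
events = [{"target": "127.0.0.1", "module_name": "port_scan", "port": 80, "event": "open"}]
# First pass (normalisedjson):
# {"name": "Started attack",
#  "children": {"127.0.0.1": {"port_scan": [
#      "target: 127.0.0.1, module_name: port_scan, port: 80, event: open"]}}}
# Second pass (d3_structure):
# {"name": "Starting attack",
#  "children": [{"name": "127.0.0.1",
#                "children": [{"name": "port_scan",
#                              "children": [{"name": "target: 127.0.0.1, ..."}]}]}]}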
View File
@ -1,10 +1,3 @@
def start(events):
"""
generate the d3_tree_v2_graph with events (using d3_tree_v1_graph)
@ -15,10 +8,11 @@ def start(events):
Returns:
a graph in HTML
"""
from nettacker.lib.graph.d3_tree_v1.engine import start

return start(events).replace(
    """\t root.children.forEach(function(child){
collapse(child);
\t });""",
    "",
)
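So the v2 graph is simply the v1 HTML with the collapse handler stripped from the embedded JavaScript, leaving the tree fully expanded. The mechanism is a plain string substitution, as in this editor's sketch:
# --- Editor's illustrative sketch, not part of the commit.
html = "<script>root.children.forEach(function(child){ collapse(child); });</script>"
print(html.replace("collapse(child);", ""))  # handler removed, tree stays expanded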
Some files were not shown because too many files have changed in this diff.