<![CDATA[pwnresist(ad3da)....Tech Space]]>http://localhost:2368/http://localhost:2368/favicon.pngpwnresist(ad3da)....Tech Spacehttp://localhost:2368/Ghost 5.63Sun, 17 Sep 2023 10:27:45 GMT60<![CDATA[Simplifying Data Imports in Django with Python: The ImportFileOperation Class]]>In Django applications, there often comes a time when you need to handle the bulk import of data, such as importing data from an Excel spreadsheet. To make it easier, there's an ImportFileOperation. This class is designed to handle the import of data from an uploaded file and

]]>
http://localhost:2368/untitled/6506c91ebd32bd40bf71b594Sun, 17 Sep 2023 09:55:42 GMT

In Django applications, there often comes a time when you need to handle the bulk import of data, such as importing data from an Excel spreadsheet. To make this easier, we can build an ImportFileOperation class. This class is designed to handle the import of data from an uploaded file and save it to the appropriate Django models.

Importing Required Libraries

import json
import pandas as pd
from django.apps import apps
from django.core.files.uploadedfile import UploadedFile
from django.db import transaction
from distributor.models import Distributor
from distributor.models import DistUsers
from distributor.models.distributor import RetailerProfile
  • apps: Part of Django, used to get access to models.
  • json: Python's built-in library for working with JSON data.
  • pandas: A powerful library for data manipulation and analysis.
  • UploadedFile: Django's class for handling uploaded files.
  • transaction: Django's transaction management for database operations.
  • Distributor, DistUsers, RetailerProfile: The distributor app's own models that the imported data will be saved to.

Class Initialization

class ImportFileOperation:
    def __init__(self, uploaded_file, user_instance=None, distributor_instance=None, **kwargs) -> None:
        self.uploaded_file = uploaded_file
        self.user_instance = user_instance
        self.distributor_instance = distributor_instance
        self.excel = None

        if user_instance is not None:
            self.user_id = user_instance.id

        if distributor_instance is not None:
            self.distributor_id = distributor_instance.id

        self.check_file()


In the class constructor (__init__), the ImportFileOperation class takes several parameters:

  • uploaded_file: The uploaded file (an Excel spreadsheet) to be processed.
  • user_instance: An optional user instance.
  • distributor_instance: An optional distributor instance.
  • **kwargs: Additional keyword arguments.

The constructor initializes the class attributes, including the uploaded file (uploaded_file), user instance (user_instance), distributor instance (distributor_instance), and an attribute called excel to store the parsed Excel data.
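As a quick illustration of how this class might be wired up, here is a minimal sketch of a Django view that hands an uploaded spreadsheet to ImportFileOperation. The view class, the import path, and the request.user.distributor relation are assumptions made for the example, not part of the original code.

# views.py - illustrative usage sketch; names marked as assumed are not from the original code
from django.http import JsonResponse
from django.views import View

from .import_file_operation import ImportFileOperation  # assumed module path


class CategoryImportView(View):
    def post(self, request):
        uploaded_file = request.FILES.get("file")  # the Excel spreadsheet sent by the client
        operation = ImportFileOperation(
            uploaded_file,
            user_instance=request.user,
            distributor_instance=request.user.distributor,  # assumed relation on the user model
        )
        result = operation.read_file("PCategory")  # a message string or a (message, success) tuple
        return JsonResponse({"detail": str(result)})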

File Checking

The check_file method verifies whether a file has been uploaded.

    def check_file(self):
        if self.uploaded_file is None:
            return "File upload not found", False
        return None

Reading and Parsing Excel Data

The read_file method reads and parses the Excel data, then hands it off based on the provided model_name:

    def read_file(self, model_name):
        file_status = self.check_file()
        print("file_status", file_status)
        if file_status is not None:  # check_file reported a missing file
            return file_status

        print("self.uploaded_file", self.uploaded_file)
        excel = pd.read_excel(self.uploaded_file)
        print("excel records read file", excel)

        self.excel = excel
        print("EXCEL", excel)
        return self.write_to_model(model_name)

It first checks if a file exists using the check_file method.

  • If the file is found, it reads the Excel data into a Pandas DataFrame (excel).
  • The parsed Excel data is stored in the excel attribute for later use.
  • It then calls the write_to_model method to save the data to the appropriate Django model based on the model_name (the short sketch below shows what this parsed data looks like).
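To make the hand-off concrete, here is a small standalone sketch of what pd.read_excel produces and how rows are accessed by their column headers. The file name is made up; the CATEGORY and DESCRIPTION columns match the headers used later by save_category_data.

import pandas as pd

excel = pd.read_excel("categories.xlsx")  # assumed sample spreadsheet
print(excel.columns.tolist())             # e.g. ['CATEGORY', 'DESCRIPTION']

for index, row in excel.iterrows():
    # each row behaves like a mapping keyed by the spreadsheet's column headers
    print(index, row["CATEGORY"], row["DESCRIPTION"])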

Saving Data to Django Models

    def write_to_model(self, model_name):
        self.model = apps.get_model(app_label="distributor", model_name=model_name)
        if self.excel is None:
            return "Failed to read the Excel file", False

        if model_name == "PCategory":
            return self.save_category_data()
        elif model_name == "Product":
            return self.save_product_data()
        elif model_name == "SalesMan":
            return self.save_salesman_data()
        elif model_name == "Retailer":
            return self.save_retailer_data()
        elif model_name == "Brand":
            return self.save_brand_data()

        return "Model for saving not implemented", False

The write_to_model method:

  • uses apps.get_model to dynamically fetch the Django model based on the provided model_name (see the short example after this list).
  • If the excel attribute is None, it returns an error message.
  • Depending on the model_name, it calls specific methods (e.g., save_category_data) to handle the data-saving logic for that model.
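For context, apps.get_model returns the model class that Django registered under the given app label and model name, so this helper never needs to import every model explicitly. A quick illustration, using the Product model name from the dispatch above (to be run inside the configured Django project):

from django.apps import apps

Product = apps.get_model(app_label="distributor", model_name="Product")
print(Product is apps.get_model("distributor", "Product"))  # True - the same registered class
print(Product.objects.count())  # the returned class behaves like a normally imported model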

Saving Category Data

    def save_category_data(self):
        try:
            with transaction.atomic():
                for index, row in self.excel.iterrows():
                    category, created = self.model.objects.get_or_create(
                        distributor_id=self.distributor_id,
                        name=row['CATEGORY'],
                        defaults={
                            'brief_description': row['DESCRIPTION'],
                        }
                    )
        except Exception as e:
            return f"Error saving category: {str(e)}", False

        return "Category saved successfully", True

The method uses a transaction to ensure data consistency. It iterates through the parsed Excel data and uses get_or_create to either retrieve an existing PCategory object or create a new one based on the provided attributes.

[A transaction is a database operation unit that ensures data integrity, consistency, and reliability. It follows ACID properties, guaranteeing that all its actions are atomic, consistent, isolated from other transactions, and durable after completion.]
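To make the bracketed note concrete, here is a minimal, generic sketch (the model and row data are illustrative, not taken from the code above) showing the property that matters here: if any row inside transaction.atomic() raises an exception, every row saved before it is rolled back.

from django.db import transaction

def import_rows(model, rows):
    # Either every row is committed, or - if any row raises - none of them are.
    try:
        with transaction.atomic():
            for row in rows:
                model.objects.create(**row)
    except Exception as exc:
        return f"Import rolled back: {exc}", False
    return "Import committed", True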


]]>
<![CDATA[Implementing the Excel File Preview Component]]>In this blog post, we'll take a deep dive into a React component, ExcelFilePreview, designed to handle and preview Excel files, enabling users to preview Excel data before deciding to upload it. We'll dissect the code step by step to understand its functionality.

Introduction

]]>
http://localhost:2368/implementing-the-excel-file-preview-component/6506c487bd32bd40bf71b50eSun, 17 Sep 2023 09:36:04 GMT

In this blog post, we'll take a deep dive into a React component, ExcelFilePreview, designed to handle and preview Excel files, enabling users to preview Excel data before deciding to upload it. We'll dissect the code step by step to understand its functionality.

Introduction

The ExcelFilePreview component is built using React and relies on the xlsx library for Excel file handling. It provides users with a modal interface that displays the contents of an Excel file, empowering them to preview the data before making an upload decision. Let's explore the code that makes this happen.

Importing Dependencies

import React, { useState, useEffect } from "react";
import XLSX from "xlsx";
import Modal from "@mui/material/Modal";
import Box from "@mui/material/Box";
import Paper from "@mui/material/Paper";
import Table from "@mui/material/Table";
import TableBody from "@mui/material/TableBody";
import TableCell from "@mui/material/TableCell";
import TableContainer from "@mui/material/TableContainer";
import TableHead from "@mui/material/TableHead";
import TableRow from "@mui/material/TableRow";
import Button from "@mui/material/Button";
import "./ExcelFilePreview.css"; // You can create a CSS file for styling

Component Definition

export default function ExcelFilePreview({ file, onClose, uploadFunction }) {
  const [excelData, setExcelData] = useState(null);
  const [loading, setLoading] = useState(true);
  const [progress, setProgress] = useState(0);

  // Derive the display name used in the modal heading below; stripping the
  // .xlsx extension here is an assumption based on how fileName is described later.
  const fileName = file ? file.name.replace(/\.xlsx$/i, "") : "";
}

The ExcelFilePreview component is defined as a functional React component that accepts three props: file, onClose, and uploadFunction.

  • file: Represents the Excel file that users want to preview.
  • onClose: A callback function to close the modal.
  • uploadFunction: A function, passed in by the parent component, responsible for uploading the selected Excel file.

State Management

  const [excelData, setExcelData] = useState(null);
  const [loading, setLoading] = useState(true);
  const [progress, setProgress] = useState(0);
  • excelData: This state variable holds the data extracted from the Excel file.
  • loading: Indicates whether the file is still being processed.
  • progress: Represents the progress of processing the file, ranging from 0 to 100%.

Handling File Processing with useEffect

  useEffect(() => {
    if (file) {
      const reader = new FileReader();
      reader.onload = (e) => {
        const data = new Uint8Array(e.target.result);
        const workbook = XLSX.read(data, { type: "array" });
        const sheetName = workbook.SheetNames[0];
        const sheet = workbook.Sheets[sheetName];
        const dataJson = XLSX.utils.sheet_to_json(sheet, { header: 1 });
        setExcelData(dataJson);

        for (let i = 1; i <= 100; i++) {
          setTimeout(() => {
            setProgress(i);
          }, i * 10);
        }

        setTimeout(() => {
          setLoading(false);
        }, 1000);
      };
      reader.readAsArrayBuffer(file);
    }
  }, [file]);

The useEffect hook is where the component handles the processing of the Excel file whenever a new file prop arrives.

  1. A FileReader is created to read the content of the Excel file.
  2. When the reader finishes loading the file, it converts the data into a Uint8Array.
  3. The xlsx library is used to parse the Excel data, extract the first sheet, and convert it into a JSON format.
  4. The extracted data is stored in the excelData state variable.
  5. During processing, a loading animation is displayed, and the progress variable is updated incrementally to reflect the file processing progress.
  6. After processing, the loading state is set to false.

Handling Form Submission

  const handleSubmit = () => {
    if (excelData) {
      uploadFunction(file);
      onClose();
    }
  };


The handleSubmit function is called when the user decides to upload the Excel file. It checks if excelData contains data, indicating that the file has been processed. If data is available, it triggers the uploadFunction to upload the file and closes the modal using the onClose callback.

Displaying the Modal

  return (
    <Modal open={true} onClose={onClose}>
      <Box
        sx={{
          position: "absolute",
          top: "50%",
          left: "50%",
          transform: "translate(-50%, -50%)",
          width: "80%",
          maxHeight: "80%",
          bgcolor: "background.paper",
          boxShadow: 24,
          p: 4,
          display: "flex",
          flexDirection: "column",
        }}
      >
        <h2 style={{ fontSize: '24px', color: 'green' }}>FileName - {fileName}</h2>
        {loading ? (
          <div style={{ textAlign: "center" }}>
            <p>Loading... {progress}%</p>
            <div className="loader"></div>
          </div>
        ) : (
          <TableContainer
            component={Paper}
            sx={{
              flex: 1,
              overflowY: "auto",
              "& .MuiTableCell-root": {
                borderBottom: "1px solid #e0e0e0",
              },
            }}
          >
            <Table>
              <TableHead>
                <TableRow>
                  {excelData &&
                    excelData[0].map((cell, index) => (
                      <TableCell
                        key={index}
                        style={{
                          position: "sticky",
                          top: 0,
                          background: "grey",
                          zIndex: 1,
                          fontWeight: "bold",
                        }}
                      >
                        {cell}
                      </TableCell>
                    ))}
                </TableRow>
              </TableHead>
              <TableBody>
                {excelData &&
                  excelData.slice(1).map((row, rowIndex) => (
                    <TableRow key={rowIndex}>
                      {row.map((cell, cellIndex) => (
                        <TableCell key={cellIndex}>{cell}</TableCell>
                      ))}
                    </TableRow>
                  ))}
              </TableBody>
            </Table>
          </TableContainer>
        )}
        <div style={{ textAlign: "center", marginTop: "16px" }}>
          <Button
            variant="contained"
            onClick={handleSubmit}
            style={{ marginRight: "16px" }}
          >
            Upload File
          </Button>
          <Button variant="contained" onClick={onClose}>
            Close
          </Button>
        </div>
      </Box>
    </Modal>
  );


The component's return statement renders the modal:

  • It displays the file name without the .xlsx extension (in the fileName heading).
  • During processing, a loading spinner and progress percentage are shown.
  • Once processing is complete, a table displays the Excel data.
  • Buttons are provided for uploading the file and closing the modal.
]]>
<![CDATA[External Testing]]>External Testing
External Testing during a pentest engagement

External Information Gathering

Start with a quick initial Nmap scan against our target to get a lay of the land and see what we're dealing with.

sudo nmap --open -oA external_ept_tcp_1k -iL scope 

Starting Nmap 7.92 ( https://nmap.
]]>
http://localhost:2368/external-testing/6506c2e2bd32bd40bf71b4fcSun, 17 Sep 2023 09:12:47 GMT

External Testing
External Testing during a pentest engagement

External Information Gathering


Start with a quick initial Nmap scan against our target to get a lay of the land and see what we're dealing with.

sudo nmap --open -oA external_ept_tcp_1k -iL scope 

Starting Nmap 7.92 ( https://nmap.org ) at 2022-06-20 14:56 EDT
Nmap scan report for 10.129.203.101
Host is up (0.12s latency).
Not shown: 989 closed tcp ports (reset)
PORT     STATE SERVICE
21/tcp   open  ftp
22/tcp   open  ssh
25/tcp   open  smtp
53/tcp   open  domain
80/tcp   open  http
110/tcp  open  pop3
111/tcp  open  rpcbind
143/tcp  open  imap
993/tcp  open  imaps
995/tcp  open  pop3s
8080/tcp open  http-proxy

Nmap done: 1 IP address (1 host up) scanned in 2.25 seconds

We notice 11 ports open from our quick top 1,000 port TCP scan. It seems that we are dealing with a web server that is also running some additional services such as FTP, SSH, email (SMTP, pop3, and IMAP), DNS, and at least two web application-related ports.

Running a full nmap scan:

sudo nmap --open -p- -A -oA external_ept_tcp_all_svc -iL scope

Starting Nmap 7.92 ( https://nmap.org ) at 2022-06-20 15:27 EDT
Nmap scan report for 10.129.203.101
Host is up (0.12s latency).
Not shown: 65524 closed tcp ports (reset)
PORT     STATE SERVICE  VERSION
21/tcp   open  ftp      vsftpd 3.0.3
| ftp-anon: Anonymous FTP login allowed (FTP code 230)
|_-rw-r--r--    1 0        0              38 May 30 17:16 flag.txt
| ftp-syst: 
|   STAT: 
| FTP server status:
|      Connected to ::ffff:10.10.14.15
|      Logged in as ftp
|      TYPE: ASCII
|      No session bandwidth limit
|      Session timeout in seconds is 300
|      Control connection is plain text
|      Data connections will be plain text
|      At session startup, client count was 1
|      vsFTPd 3.0.3 - secure, fast, stable
|_End of status
22/tcp   open  ssh      OpenSSH 8.2p1 Ubuntu 4ubuntu0.5 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   3072 71:08:b0:c4:f3:ca:97:57:64:97:70:f9:fe:c5:0c:7b (RSA)
|   256 45:c3:b5:14:63:99:3d:9e:b3:22:51:e5:97:76:e1:50 (ECDSA)
|_  256 2e:c2:41:66:46:ef:b6:81:95:d5:aa:35:23:94:55:38 (ED25519)
25/tcp   open  smtp     Postfix smtpd
|_ssl-date: TLS randomness does not represent time
| ssl-cert: Subject: commonName=ubuntu
| Subject Alternative Name: DNS:ubuntu
| Not valid before: 2022-05-30T17:15:40
|_Not valid after:  2032-05-27T17:15:40
|_smtp-commands: ubuntu, PIPELINING, SIZE 10240000, VRFY, ETRN, STARTTLS, ENHANCEDSTATUSCODES, 8BITMIME, DSN, SMTPUTF8, CHUNKING
53/tcp   open  domain   
| fingerprint-strings: 
|   DNSVersionBindReqTCP: 
|     version
|     bind
| dns-nsid: 
|_  bind.version: 
80/tcp   open  http     Apache httpd 2.4.41 ((Ubuntu))
|_http-server-header: Apache/2.4.41 (Ubuntu)
|_http-title: Inlanefreight
110/tcp  open  pop3     Dovecot pop3d
|_ssl-date: TLS randomness does not represent time
| ssl-cert: Subject: commonName=ubuntu
| Subject Alternative Name: DNS:ubuntu
| Not valid before: 2022-05-30T17:15:40
|_Not valid after:  2032-05-27T17:15:40
|_pop3-capabilities: SASL TOP PIPELINING STLS RESP-CODES AUTH-RESP-CODE CAPA UIDL
111/tcp  open  rpcbind  2-4 (RPC #100000)
| rpcinfo: 
|   program version    port/proto  service
|   100000  2,3,4        111/tcp   rpcbind
|   100000  2,3,4        111/udp   rpcbind
|   100000  3,4          111/tcp6  rpcbind
|_  100000  3,4          111/udp6  rpcbind
143/tcp  open  imap     Dovecot imapd (Ubuntu)
|_imap-capabilities: LITERAL+ LOGIN-REFERRALS more Pre-login post-login ID capabilities listed have LOGINDISABLEDA0001 OK ENABLE IDLE STARTTLS SASL-IR IMAP4rev1
|_ssl-date: TLS randomness does not represent time
| ssl-cert: Subject: commonName=ubuntu
| Subject Alternative Name: DNS:ubuntu
| Not valid before: 2022-05-30T17:15:40
|_Not valid after:  2032-05-27T17:15:40
993/tcp  open  ssl/imap Dovecot imapd (Ubuntu)
|_ssl-date: TLS randomness does not represent time
| ssl-cert: Subject: commonName=ubuntu
| Subject Alternative Name: DNS:ubuntu
| Not valid before: 2022-05-30T17:15:40
|_Not valid after:  2032-05-27T17:15:40
|_imap-capabilities: LITERAL+ LOGIN-REFERRALS AUTH=PLAINA0001 post-login ID capabilities more have listed OK ENABLE IDLE Pre-login SASL-IR IMAP4rev1
995/tcp  open  ssl/pop3 Dovecot pop3d
| ssl-cert: Subject: commonName=ubuntu
| Subject Alternative Name: DNS:ubuntu
| Not valid before: 2022-05-30T17:15:40
|_Not valid after:  2032-05-27T17:15:40
|_ssl-date: TLS randomness does not represent time
|_pop3-capabilities: SASL(PLAIN) TOP PIPELINING CAPA RESP-CODES AUTH-RESP-CODE USER UIDL
8080/tcp open  http     Apache httpd 2.4.41 ((Ubuntu))
|_http-server-header: Apache/2.4.41 (Ubuntu)
| http-open-proxy: Potentially OPEN proxy.
|_Methods supported:CONNECTION
|_http-title: Support Center
1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port53-TCP:V=7.92%I=7%D=6/20%Time=62B0CA68%P=x86_64-pc-linux-gnu%r(DNSV
SF:ersionBindReqTCP,39,"\x007\0\x06\x85\0\0\x01\0\x01\0\0\0\0\x07version\x
SF:04bind\0\0\x10\0\x03\xc0\x0c\0\x10\0\x03\0\0\0\0\0\r\x0c");
No exact OS matches for host (If you know what OS is running on it, see https://nmap.org/submit/ ).
TCP/IP fingerprint:
OS:SCAN(V=7.92%E=4%D=6/20%OT=21%CT=1%CU=36505%PV=Y%DS=2%DC=T%G=Y%TM=62B0CA8
OS:8%P=x86_64-pc-linux-gnu)SEQ(SP=104%GCD=1%ISR=10B%TI=Z%CI=Z%II=I%TS=A)OPS
OS:(O1=M505ST11NW7%O2=M505ST11NW7%O3=M505NNT11NW7%O4=M505ST11NW7%O5=M505ST1
OS:1NW7%O6=M505ST11)WIN(W1=FE88%W2=FE88%W3=FE88%W4=FE88%W5=FE88%W6=FE88)ECN
OS:(R=Y%DF=Y%T=40%W=FAF0%O=M505NNSNW7%CC=Y%Q=)T1(R=Y%DF=Y%T=40%S=O%A=S+%F=A
OS:S%RD=0%Q=)T2(R=N)T3(R=N)T4(R=Y%DF=Y%T=40%W=0%S=A%A=Z%F=R%O=%RD=0%Q=)T5(R
OS:=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)T6(R=Y%DF=Y%T=40%W=0%S=A%A=Z%F
OS:=R%O=%RD=0%Q=)T7(R=Y%DF=Y%T=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)U1(R=Y%DF=N%
OS:T=40%IPL=164%UN=0%RIPL=G%RID=G%RIPCK=G%RUCK=G%RUD=G)IE(R=Y%DFI=N%T=40%CD
OS:=S)

Network Distance: 2 hops
Service Info: Host:  ubuntu; OSs: Unix, Linux; CPE: cpe:/o:linux:linux_kernel

TRACEROUTE (using port 443/tcp)
HOP RTT       ADDRESS
1   116.63 ms 10.10.14.1
2   117.72 ms 10.129.203.101

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 84.91 seconds

The first thing we can see is that this is an Ubuntu host running an HTTP proxy of some kind. We can use this handy Nmap grep cheatsheet to "cut through the noise" and extract the most useful information from the scan. Let's pull out the running services and their version numbers, so we have them handy for further investigation.

egrep -v "^#|Status: Up" external_ept_tcp_all_svc.gnmap | cut -d ' ' -f4- | tr ',' '\n' | \                                                               
sed -e 's/^[ \t]*//' | awk -F '/' '{print $7}' | grep -v "^$" | sort | uniq -c \
| sort -k 1 -nr

      2 Dovecot pop3d
      2 Dovecot imapd (Ubuntu)
      2 Apache httpd 2.4.41 ((Ubuntu))
      1 vsftpd 3.0.3
      1 Postfix smtpd
      1 OpenSSH 8.2p1 Ubuntu 4ubuntu0.5 (Ubuntu Linux; protocol 2.0)
      1 2-4 (RPC #100000)

From these listening services, there are several things we can try immediately, but since we see DNS is present, let's try a DNS Zone Transfer to see if we can enumerate any valid subdomains for further exploration and expand our testing scope. We know from the scoping sheet that the primary domain is EXTERNAL.LOCAL, so let's see what we can find.

dig axfr external.local @10.129.203.101

; <<>> DiG 9.16.27-Debian <<>> axfr inlanefreight.local @10.129.203.101
;; global options: +cmd
inlanefreight.local.	86400	IN	SOA	ns1.inlanfreight.local. dnsadmin.inlanefreight.local. 21 604800 86400 2419200 86400
inlanefreight.local.	86400	IN	NS	inlanefreight.local.
inlanefreight.local.	86400	IN	A	127.0.0.1
blog.inlanefreight.local. 86400	IN	A	127.0.0.1
careers.inlanefreight.local. 86400 IN	A	127.0.0.1
dev.inlanefreight.local. 86400	IN	A	127.0.0.1
gitlab.inlanefreight.local. 86400 IN	A	127.0.0.1
ir.inlanefreight.local.	86400	IN	A	127.0.0.1
status.inlanefreight.local. 86400 IN	A	127.0.0.1
support.inlanefreight.local. 86400 IN	A	127.0.0.1
tracking.inlanefreight.local. 86400 IN	A	127.0.0.1
vpn.inlanefreight.local. 86400	IN	A	127.0.0.1
inlanefreight.local.	86400	IN	SOA	ns1.inlanfreight.local. dnsadmin.inlanefreight.local. 21 604800 86400 2419200 86400
;; Query time: 116 msec
;; SERVER: 10.129.203.101#53(10.129.203.101)
;; WHEN: Mon Jun 20 16:28:20 EDT 2022
;; XFR size: 14 records (messages 1, bytes 448)

The zone transfer works, and we find 9 additional subdomains. In a real-world engagement, if a DNS Zone Transfer is not possible, we could enumerate subdomains in many ways. The DNSDumpster.com website is a quick bet.
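If neither a zone transfer nor a third-party service were available, a crude wordlist-based lookup is also easy to script. Below is a minimal Python sketch; the wordlist is illustrative, and it assumes the resolver in use (or /etc/hosts) can actually resolve the target's names.

import socket

domain = "external.local"                              # primary domain from the scoping sheet
candidates = ["blog", "dev", "vpn", "mail", "portal"]  # tiny illustrative wordlist

for sub in candidates:
    fqdn = f"{sub}.{domain}"
    try:
        print(f"{fqdn} -> {socket.gethostbyname(fqdn)}")
    except socket.gaierror:
        pass  # no record for this candidate, move on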

If DNS were not in play, we could also perform vhost enumeration using a tool such as ffuf. Let's try it here to see if we find anything else that the zone transfer missed. We'll use a dictionary list to help us.

To fuzz vhosts, we must first figure out what the response looks like for a non-existent vhost. We can choose anything we want here; we just want to provoke a response, so we should choose something that very likely does not exist.

curl -s -I http://10.129.203.101 -H "HOST: defnotvalid.external.local" | grep "Content-Length:"

Content-Length: 15157

Trying to specify defnotvalid in the host header gives us a response size of 15157. We can infer that this will be the same for any invalid vhost so let's work with ffuf, using the -fs flag to filter out responses with size 15157 since we know them to be invalid.

ffuf -w namelist.txt:FUZZ -u http://10.129.203.101/ -H 'Host:FUZZ.external.local' -fs 15157

        /'___\  /'___\           /'___\       
       /\ \__/ /\ \__/  __  __  /\ \__/       
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\      
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/      
         \ \_\   \ \_\  \ \____/  \ \_\       
          \/_/    \/_/   \/___/    \/_/       

       v1.4.1-dev
________________________________________________

 :: Method           : GET
 :: URL              : http://10.129.203.101/
 :: Wordlist         : FUZZ: namelist.txt
 :: Header           : Host: FUZZ.inlanefreight.local
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200,204,301,302,307,401,403,405,500
 :: Filter           : Response size: 15157
________________________________________________

blog                    [Status: 200, Size: 8708, Words: 1509, Lines: 232, Duration: 143ms]
careers                 [Status: 200, Size: 51810, Words: 22044, Lines: 732, Duration: 153ms]
dev                     [Status: 200, Size: 2048, Words: 643, Lines: 74, Duration: 1262ms]
gitlab                  [Status: 302, Size: 113, Words: 5, Lines: 1, Duration: 226ms]
ir                      [Status: 200, Size: 28545, Words: 2888, Lines: 210, Duration: 1089ms]
<REDACTED>              [Status: 200, Size: 56, Words: 3, Lines: 4, Duration: 120ms]
status                  [Status: 200, Size: 917, Words: 112, Lines: 43, Duration: 126ms]
support                 [Status: 200, Size: 26635, Words: 11730, Lines: 523, Duration: 122ms]
tracking                [Status: 200, Size: 35185, Words: 10409, Lines: 791, Duration: 124ms]
vpn                     [Status: 200, Size: 1578, Words: 414, Lines: 35, Duration: 121ms]
:: Progress: [151265/151265] :: Job [1/1] :: 341 req/sec :: Duration: [0:07:33] :: Errors: 0 ::

Comparing the results, we see one vhost that was not part of the results from the DNS Zone Transfer we performed.


Enumeration Results

From our initial enumeration, we noticed several interesting ports open that we will probe further in the next section. We also gathered several subdomains/vhosts. Let's add these to our /etc/hosts file so we can investigate each further.

sudo tee -a /etc/hosts > /dev/null <<EOT

## external hosts 
10.129.203.101 external.local blog.external.local careers.external.local dev.external.local gitlab.external.local ir.external.local status.external.local support.external.local tracking.external.local vpn.external.local
EOT

Service Enumeration & Exploitation


Listening Services

Our Nmap scans uncovered a few interesting services:

  • Port 21: FTP
  • Port 22: SSH
  • Port 25: SMTP
  • Port 53: DNS
  • Port 80: HTTP
  • Ports 110/143/993/995: IMAP & POP3
  • Port 111: rpcbind

We already performed a DNS Zone Transfer during our initial information gathering, which yielded several subdomains that we'll dig into deeper later. Other DNS attacks aren't worth attempting in our current environment.


FTP

Let's start with FTP on port 21. The Nmap Aggressive Scan discovered that FTP anonymous login was possible. Let's confirm that manually.

ftp 10.129.203.101

Connected to 10.129.203.101.
220 (vsFTPd 3.0.3)
Name (10.129.203.101:tester): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
-rw-r--r--    1 0        0              38 May 30 17:16 flag.txt
226 Directory send OK.
ftp>

Connecting with the anonymous user and a blank password works. It does not look like we can access any interesting files besides one, and we also cannot change directories.

ftp> put test.txt 

local: test.txt remote: test.txt
200 PORT command successful. Consider using PASV.
550 Permission denied.

We are also unable to upload a file.

Other attacks, such as an FTP Bounce Attack, are unlikely, and we don't have any information about the internal network yet. Searching for public exploits for vsFTPd 3.0.3 only shows this PoC for a Remote Denial of Service, which is out of the scope of our testing. Brute forcing won't help us here either since we don't know any usernames.

This looks like a dead end. Let's move on.


SSH

Next up is SSH. We'll start with a banner grab:

 nc -nv 10.129.203.101 22

(UNKNOWN) [10.129.203.101] 22 (ssh) open
SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5

This shows us that the host is running OpenSSH version 8.2, which has no known vulnerabilities at the time of writing. We could try some password brute-forcing, but we don't have a list of valid usernames, so it would be a shot in the dark. It's also doubtful that we'd be able to brute-force the root password. We can try a few combos such as admin:admin, root:toor, admin:Welcome, admin:Pass123 but have no success.

 ssh admin@10.129.203.101

The authenticity of host '10.129.203.101 (10.129.203.101)' can't be established.
ECDSA key fingerprint is SHA256:3I77Le3AqCEUd+1LBAraYTRTF74wwJZJiYcnwfF5yAs.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.129.203.101' (ECDSA) to the list of known hosts.
admin@10.129.203.101's password: 
Permission denied, please try again.

SSH looks like a dead end as well. Let's see what else we have.


Email Services

SMTP is interesting. In a real-world assessment, we could use a website such as MXToolbox or the tool dig to enumerate MX Records.

Let's do another scan against port 25 to look for misconfigurations.

 sudo nmap -sV -sC -p25 10.129.203.101

Starting Nmap 7.92 ( https://nmap.org ) at 2022-06-20 18:55 EDT
Nmap scan report for inlanefreight.local (10.129.203.101)
Host is up (0.11s latency).

PORT   STATE SERVICE VERSION
25/tcp open  smtp    Postfix smtpd
| ssl-cert: Subject: commonName=ubuntu
| Subject Alternative Name: DNS:ubuntu
| Not valid before: 2022-05-30T17:15:40
|_Not valid after:  2032-05-27T17:15:40
|_smtp-commands: ubuntu, PIPELINING, SIZE 10240000, VRFY, ETRN, STARTTLS, ENHANCEDSTATUSCODES, 8BITMIME, DSN, SMTPUTF8, CHUNKING
|_ssl-date: TLS randomness does not represent time
Service Info: Host:  ubuntu

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 5.37 seconds

Next, we'll check for any misconfigurations related to authentication. We can try to use the VRFY command to enumerate system users.

 telnet 10.129.203.101 25

Trying 10.129.203.101...
Connected to 10.129.203.101.
Escape character is '^]'.
220 ubuntu ESMTP Postfix (Ubuntu)
VRFY root
252 2.0.0 root
VRFY www-data
252 2.0.0 www-data
VRFY randomuser
550 5.1.1 <randomuser>: Recipient address rejected: User unknown in local recipient table

We can see that the VRFY command is not disabled, and we can use this to enumerate valid users. This could potentially be leveraged to gather a list of users we could use to mount a password brute-forcing attack against the FTP and SSH services and perhaps others. Though this is relatively low-risk, it's worth noting down as a Low finding for our report as our clients should reduce their external attack surface as much as possible. If there is no valid business reason for this command to be enabled, then we should advise them to disable it.

We could attempt to enumerate more users with a tool such as smtp-user-enum to drive the point home and potentially find more users. It's typically not worth spending much time brute-forcing authentication for externally-facing services. This could cause a service disruption, so even if we can make a user list, we can try a few weak passwords and move on.
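As a lightweight alternative to a dedicated tool, the same VRFY check can be scripted with Python's standard smtplib. The target IP comes from the scan above; the username list here is a small, illustrative one.

import smtplib

target = "10.129.203.101"
usernames = ["root", "www-data", "admin", "randomuser"]  # illustrative wordlist

smtp = smtplib.SMTP(target, 25, timeout=10)
smtp.ehlo()
for user in usernames:
    code, message = smtp.verify(user)  # sends "VRFY <user>"
    if code in (250, 251, 252):        # a 2xx reply means the server did not reject the user
        print(f"[+] {user}: {message.decode(errors='ignore')}")
smtp.quit()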

We could repeat this process with the EXPN and RCPT TO commands, but it won't yield anything additional.

The POP3 protocol can also be used for enumerating users depending on how it is set up. We can try to enumerate system users with the USER command again, and if the server replies with +OK, the user exists on the system. This doesn't work for us. Probing port 995, the SSL/TLS port for POP3 doesn't yield anything either.

 telnet 10.129.203.101 110

Trying 10.129.203.101...
Connected to 10.129.203.101.
Escape character is '^]'.
+OK Dovecot (Ubuntu) ready.
user www-data
-ERR [AUTH] Plaintext authentication disallowed on non-secure (SSL/TLS) connections.

We'd want to look further at the client's email implementation in a real-world assessment. If they are using Office 365 or on-prem Exchange, we may be able to mount a password spraying attack that could yield access to email inboxes or potentially the internal network if we can use a valid email password to connect over VPN. We may also come across an Open Relay, which we could possibly abuse for Phishing by sending emails as made-up users or spoofing an email account to make an email look official and attempt to trick employees into entering credentials or executing a payload. Phishing is out of scope for this particular assessment and likely will be for most External Penetration Tests, so this type of vulnerability would be worth confirming and reporting if we come across it, but we should not go further than simple validation without checking with the client first. However, this could be extremely useful on a full-scope red team assessment.

We can check for it anyway, but we do not find an open relay, which is good for our client!

 nmap -p25 -Pn --script smtp-open-relay  10.129.203.101

Starting Nmap 7.92 ( https://nmap.org ) at 2022-06-20 19:14 EDT
Nmap scan report for inlanefreight.local (10.129.203.101)
Host is up (0.12s latency).

PORT   STATE SERVICE
25/tcp open  smtp
|_smtp-open-relay: Server doesn't seem to be an open relay, all tests failed

Nmap done: 1 IP address (1 host up) scanned in 24.30 seconds

Moving On

Port 111 is the rpcbind service which should not be exposed externally, so we could write up a Low finding for Unnecessary Exposed Services or similar. This port can be probed to fingerprint the operating system or potentially gather information about available services. We can try to probe it with the rpcinfo command or Nmap. It works, but we do not get back anything useful. Again, worth noting down so the client is aware of what they are exposing but nothing else we can do with it.

 rpcinfo 10.129.203.101

   program version netid     address                service    owner
    100000    4    tcp6      ::.0.111               portmapper superuser
    100000    3    tcp6      ::.0.111               portmapper superuser
    100000    4    udp6      ::.0.111               portmapper superuser
    100000    3    udp6      ::.0.111               portmapper superuser
    100000    4    tcp       0.0.0.0.0.111          portmapper superuser
    100000    3    tcp       0.0.0.0.0.111          portmapper superuser
    100000    2    tcp       0.0.0.0.0.111          portmapper superuser
    100000    4    udp       0.0.0.0.0.111          portmapper superuser
    100000    3    udp       0.0.0.0.0.111          portmapper superuser
    100000    2    udp       0.0.0.0.0.111          portmapper superuser
    100000    4    local     /run/rpcbind.sock      portmapper superuser
    100000    3    local     /run/rpcbind.sock      portmapper superuser

It's worth consulting this HackTricks guide on Pentesting rpcbind for future awareness regarding this service.

The last port is port 80, which, as we know, is the HTTP service. We know there are likely multiple web applications based on the subdomain and vhost enumeration we performed earlier. So, let's move on to web. We still don't have a foothold or much of anything aside from a handful of medium and low-risk findings. In modern environments, we rarely see externally exploitable services like a vulnerable FTP server or similar that will lead to remote code execution (RCE). Never say never, though. We have seen crazier things, so it is always worth exploring every possibility. Most organizations we face will be most susceptible to attack through their web applications as these often present a vast attack surface, so we'll typically spend most of our time during an External Penetration test enumerating and attacking web applications.

]]>
<![CDATA[Tech_Supp0rt]]>Tech_Supp0rt
Tech_Support

A box of how a scammer’s server got hacked due to some unpatched vulnerabilities.

Nmap scan — identifies the open ports:

22/tcp — ssh(secure shell)

80/tcp — HTTP

139/tcp — Netbios-ssn

445/tcp — SMB(samba share)

nmap scan

SSH seems like

]]>
http://localhost:2368/tech_supp0rt/6506c24ebd32bd40bf71b4e9Sun, 17 Sep 2023 09:10:04 GMT

Tech_Supp0rt
Tech_Support

A box showing how a scammer's server got hacked due to some unpatched vulnerabilities.

Nmap scan — identifies the open ports:

22/tcp — SSH (Secure Shell)

80/tcp — HTTP

139/tcp — NetBIOS-ssn

445/tcp — SMB (Samba share)

nmap scan

SSH seems like a dead end because we lack credentials to access the system. Enumerating port 80 (HTTP) displays the default Apache web page, from which we can conclude that the OS running is Linux.

Default apache web page

Performing a directory brute force using dirsearch (https://github.com/maurosoria/dirsearch), there were only 2 directories of interest:

  • Wordpress
  • Test
using dirsearch

Further enumerating WordPress using wpscan to find a potential vulnerability seemed like a dead end. Enumerating SMB, I was able to log in with no password and discovered a file called enter.txt, which I downloaded and viewed.

Enumerating SMB

The contents of enter.txt include instructions, a username (admin), and credentials.

Decoding hashed passwords

One thing catches my attention: the Subrion site.


Navigating to the Subrion site seems like a dead end, but after intercepting the request with Burp Suite and sending it to Repeater with the path subrion/robots.txt, the path subrion/panel/ turns out to be a login page. After trying the credentials from enter.txt, we are able to log in to the system.

subrion login page
Subrion page after login

During the enumeration process, Wappalyzer reveals that the site is running Subrion as a CMS, and I also discovered a file upload function in the system.

Technologies used

Setting out to look for a specific CVE for the Subrion CMS using searchsploit, I discovered a file upload vulnerability.

searchsploit vulnerabilities for subrion

Download the exploit code and edit it, changing the IP and paths.

code.py

Exploiting the CMS gives us a connection to a web shell. Hurray! But we need more than just a web shell.


After enumerating the system, I discovered that it accepts uploads with the .phar extension.

I navigated to Pentest Monkey on GitHub, where I downloaded a PHP reverse shell and uploaded it to the system.


All I had to do was change the IP to my tun0 address and the listening port to one of my choosing, then create a netcat listener; a connection was established after uploading the PHP reverse shell and navigating to its link.


Now we have a proper tty (TeleTYpewriter) shell.

Now all we need to do is stabilize the shell:
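A common first step (a general technique, not something specific to this box) is to spawn a pseudo-terminal with Python from inside the reverse shell:

python3 -c 'import pty; pty.spawn("/bin/bash")'

Backgrounding the shell and fixing the terminal settings on the attacking machine then gives a fully interactive session.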


Bingo, we have our flag. Running 'sudo -l' lists the commands that the current user can run as root, and using the iconv technique from https://gtfobins.github.io/gtfobins/iconv/ I was able to view the flag.

]]>
<![CDATA[Deploying a multi-container application to Azure Kubernetes Services]]>
Azure Kubernetes Service (AKS) is the quickest way to use Kubernetes on Azure. Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading,

]]>
http://localhost:2368/deploying-a-multi-container-application-to-azure-kubernetes-services/6506c0f3bd32bd40bf71b4a6Tue, 22 Nov 2022 00:00:00 GMT


Azure Kubernetes Service (AKS) is the quickest way to use Kubernetes on Azure. Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications offline. Azure DevOps helps in creating Docker images for faster deployments and reliability using the continuous build option.

Lab Scenario

This lab uses a Dockerized ASP.NET Core web application - MyHealthClinic (MHC) and is deployed to a Kubernetes cluster running on Azure Kubernetes Service (AKS) using Azure DevOps.

The application is defined by the mhc-aks.yaml manifest, with a Load Balancer in the front end and a Redis Cache in the back end.

Click the Azure DevOps Demo Generator link and follow the instructions on the Getting Started page to provision the project to your Azure DevOps organization.

For this lab, the Azure Kubernetes Service template is used, which is already selected when you click on the link above. There are some additional extensions required for this lab, and they can be automatically installed during the process.

  1. Launch the Azure Cloud Shell from the Azure portal and choose Bash.

Deploy Kubernetes to Azure, using CLI:

i. Get the latest available Kubernetes version in your preferred region into a bash variable. Replace <region> with the region of your choosing; here we use "eastus".

version=$(az aks get-versions -l eastus --query 'orchestrators[-1].orchestratorVersion' -o tsv)

ii. Create a Resource Group

  az group create --name akshandsonlab --location eastus

iii. Create AKS using the latest version available

 az aks create --resource-group akshandsonlab --name akspwnresist --enable-addons monitoring --kubernetes-version $version --generate-ssh-keys --location eastus

Deploy Azure Container Registry (ACR):

 az acr create --resource-group akshandsonlab --name pwnresistmhc --sku Standard --location eastus

Authenticate with Azure Container Registry from Azure Kubernetes Service

az aks update -n akspwnresist -g akshandsonlab --attach-acr pwnresistmhc

Create Azure SQL server and Database:

 az sql server create -l eastus -g akshandsonlab -n sqlsvrpwnresist -u sqladmin -p P2ssw0rd1234

Create a database

 az sql db create -g akshandsonlab -s sqlsvrpwnresist -n mhcdb --service-objective S0

The following components are now deployed: Container Registry, Kubernetes Service, and SQL Server along with a SQL Database. Access each of these components individually and make a note of the details, which will be used in Exercise 1.


Select the mhcdb SQL database and make a note of the Server name.

Click on “Set server Firewall” and enable “Allow Azure services …” option.

Navigate to the resource group, select the created container registry and make a note of the Login server name.

Configure Build pipeline

  1. Navigate to Pipelines –> Pipelines.
  2. Select MyHealth.AKS.Build pipeline and click Edit.

In the Run services task, select your Azure subscription from the Azure subscription dropdown. Click Authorize.

Following the successful authentication, select appropriate values from the dropdown - Azure subscription and Azure Container Registry as shown.

Repeat this for the Build services, Push services and Lock services tasks in the pipeline.


The applicationsettings.json file contains the database connection string used to connect to the Azure SQL database that was created at the beginning of this lab.

The mhc-aks.yaml manifest defines the deployments, services, and pods for the application.

Click on the Variables tab.

Update ACR and SQLserver values for Pipeline Variables with the details noted earlier while configuring the environment.

Save the changes.

Configure Build pipeline (YAML) -Optional

We also have a YAML build pipeline if that’s something you’re interested in. To proceed through the YAML pipeline, choose MyHealth.AKS.Build-YAML and click Edit. If you utilize the YAML pipeline, make sure to update the MyHealth.AKS.Release release definition’s artifact link.

  1. Navigate to Pipelines –> Pipelines.

2. Select MyHealth.AKS.Build - YAML pipeline and click Edit.

3. In Run Services task, select settings. Select your Azure subscription from Azure subscription dropdown. Click Authorize.

4. Following the successful authentication, select appropriate values from the dropdown - Azure subscription and Azure Container Registry as shown and click Add.

Repeat this for the Build services, Push services and Lock services tasks in the pipeline.

5. Click on the Variables tab.

6. Update ACR and SQLserver values for Pipeline Variables with the details noted earlier while configuring the environment.

Configure Release pipeline

  1. Navigate to Pipelines | Releases. Select MyHealth.AKS.Release pipeline and click Edit.

2. Select Dev stage and click View stage tasks to view the pipeline tasks.


3. In the Dev environment, under the DB deployment phase, select Azure Resource Manager from the drop down for Azure Service Connection Type, update the Azure Subscription value from the dropdown for Execute Azure SQL: DacpacTask task.


4. In the AKS deployment phase, select Create Deployments & Services in AKS task.


Update the Azure Subscription, Resource Group and Kubernetes cluster from the dropdown. Expand the Secrets section and update the parameters for Azure subscription and Azure container registry from the dropdown.

Repeat similar steps for Update image in AKS task.

Create Deployments & Services in AKS will create the deployments and services in AKS as per the configuration specified in the mhc-aks.yaml file. The first time it runs, the pod will pull the latest Docker image.

Update image in AKS will pull the appropriate image corresponding to the BuildID from the specified repository and deploy that Docker image to the mhc-front pod running in AKS.

A secret called mysecretkey is created in the AKS cluster through Azure DevOps by running the kubectl create secret command in the background. This secret will be used for authorization while pulling the myhealth.web image from the Azure Container Registry.

5. Select the Variables section under the release definition, update ACR and SQLserver values for Pipeline Variables with the details noted earlier while configuring the environment. Select the Save button.


Trigger a Build and deploy application

Let us trigger a build manually and upon completion, an automatic deployment of the application will be triggered. Our application is designed to be deployed in the pod with the load balancer in the front-end and Redis cache in the back-end.

  1. Select the MyHealth.AKS.Build pipeline and click Run pipeline.

2. Once the build process starts, select the build job to see the build in progress.


3. Switch back to the Azure DevOps portal. Select the Releases tab in the Pipelines section and double-click on the latest release. Select In progress link to see the live logs and release summary.


Once the release is complete, launch the Azure Cloud Shell and run the below commands to see the pods running in AKS

a). Type az aks get-credentials --resource-group yourResourceGroup --name yourAKSname in the command prompt to get the access credentials for the Kubernetes cluster. Replace the variables yourResourceGroup and yourAKSname with the actual values.

b). kubectl get pods

c). To access the application, run the below command. If you see that the External-IP is pending, wait for some time until an IP is assigned.

kubectl get service mhc-front --watch


Copy the External-IP and paste it in the browser and press the Enter button to launch the application.


Summary

Azure Kubernetes Service (AKS) reduces the complexity and operational overhead of managing a Kubernetes cluster by offloading much of that responsibility to Azure. With Azure DevOps and Azure Kubernetes Service (AKS), we can build a DevOps workflow for dockerized applications by leveraging the Docker capabilities enabled on Azure DevOps hosted agents.

]]>
<![CDATA[Role-Based Access Control]]>

Lab scenario

You have been asked to create a proof of concept showing how Azure users and groups are created. Also, how role-based access control is used to assign roles to groups. Specifically, you need to:

  • Create a Senior Admins group containing the user account of Joseph Price as its
]]>
http://localhost:2368/role-based-access-control/6506c286bd32bd40bf71b4f2Thu, 22 Sep 2022 00:00:00 GMT

Lab scenario


You have been asked to create a proof of concept showing how Azure users and groups are created. Also, how role-based access control is used to assign roles to groups. Specifically, you need to:

  • Create a Senior Admins group containing the user account of Joseph Price as its member.
  • Create a Junior Admins group containing the user account of Isabel Garcia as its member.
  • Create a Service Desk group containing the user account of Dylan Williams as its member.
  • Assign the Virtual Machine Contributor role to the Service Desk group.
  1. In the Search resources, services, and docs text box at the top of the Azure portal page, type Azure Active Directory and press the Enter key.
  2. On the Overview blade of the Azure Active Directory tenant, in the Manage section, select Users, and then select + New user.
  3. On the New User blade, ensure that the Create user option is selected, and specify the following settings:
  4. Click on the copy icon next to the User name to copy the full user name.
  5. Ensure that the Auto-generate password option is selected, then select the Show password checkbox to view the automatically generated password. You would need to provide this password, along with the user name, to Joseph.
  6. Click Create.
  7. Refresh the Users | All users blade to verify the new user was created in your Azure AD tenant.

Use the Azure portal to create a Senior Admins group and add the user account of Joseph Price to the group.

In this task, you will create the Senior Admins group, add the user account of Joseph Price to the group, and configure it as the group owner.

  1. In the Azure portal, navigate back to the blade displaying your Azure Active Directory tenant.
  2. In the Manage section, click Groups, and then select + New group.
  3. On the New Group blade, specify the following settings (leave others with their default values):

Group type — Security

Group Name — Senior Admins

Membership Type — Assigned

  1. Click the No owners selected link, on the Add owners blade, select Joseph Price, and click Select.
  2. Click the No members selected link, on the Add members blade, select Joseph Price, and click Select.
  3. Back on the New Group blade, click Create.

Create a Junior Admins group containing the user account of Isabel Garcia as its member.

Task 1: Use PowerShell to create a user account for Isabel Garcia.

In this task, you will create a user account for Isabel Garcia by using PowerShell.

  1. Open the Cloud Shell by clicking the first icon in the top right of the Azure Portal. If prompted, select PowerShell and Create storage.

Ensure PowerShell is selected in the drop-down menu in the upper-left corner of the Cloud Shell pane.


Use PowerShell to create the Junior Admins group and add the user account of Isabel Garcia to the group.

In this task, you will create the Junior Admins group and add the user account of Isabel Garcia to the group by using PowerShell.


Create a Service Desk group containing the user account of Dylan Williams as its member.

Task 1: Use Azure CLI to create a user account for Dylan Williams.

In this task, you will create a user account for Dylan Williams.

  1. In the drop-down menu in the upper-left corner of the Cloud Shell pane, select Bash, and, when prompted, click Confirm.
  2. In the Bash session within the Cloud Shell pane, run the following to identify the name of your Azure AD tenant:

Task 2: Use Azure CLI to create the Service Desk group and add the user account of Dylan to the group.

In this task, you will create the Service Desk group and assign Dylan to the group.

  1. In the same Bash session within the Cloud Shell pane, run the following to create a new security group named Service Desk.

Assign the Virtual Machine Contributor role to the Service Desk group.

In this exercise, you will complete the following tasks:

  • Task 1: Create a resource group.
  • Task 2: Assign the Service Desk Virtual Machine Contributor permissions to the resource group.

Task 1: Create a resource group

  1. In the Azure portal, in the Search resources, services, and docs text box at the top of the Azure portal page, type Resource groups and press the Enter key.
  2. On the Resource groups blade, click + Create and specify the following settings:

Subscription name — the name of your Azure subscription

Resource group name — AZ500Lab01

Location — East US

3. Click Review + create and then Create.

4. Back on the Resource groups blade, refresh the page and verify your new resource group appears in the list of resource groups.


Task 2: Assign the Service Desk Virtual Machine Contributor permissions.

  1. On the Resource groups blade, click the AZ500LAB01 resource group entry.
  2. On the AZ500Lab01 blade, click Access control (IAM) in the middle pane.
  3. On the AZ500Lab01 | Access control (IAM) blade, click + Add and then, in the drop-down menu, click Add role assignment.
  4. On the Add role assignment blade, specify the following settings and click Next after each step:

Role in the search tab — Virtual Machine Contributor

Assign access to (Under Members Pane) — User, group, or service principal

Select (+Select Members) — Service Desk

  1. Click Review + assign twice to create the role assignment.
  2. From the Access control (IAM) blade, select Role assignments.
  3. On the AZ500Lab01 | Access control (IAM) blade, on the Check access tab, in the Search by name or email address text box, type Dylan Williams.
  4. In the list of search results, select the user account of Dylan Williams and, on the Dylan Williams assignments — AZ500Lab01 blade, view the newly created assignment.
  5. Close the Dylan Williams assignments — AZ500Lab01 blade.
  6. Repeat the same last two steps to check access for Joseph Price.
]]>
<![CDATA[Networking in Sec]]>IP Addresses

The command ifconfig on Linux displays an inet — IPV4 address(in decimal notation) & inet6 — IPV6 address(in hexadecimal notation). IP address is essential in communication. (We communicate over layer 3)

2 ^ 32 = 4,294,967,296 → The possible number of IPV4 address we can

]]>
http://localhost:2368/networking-in-sec/6506c1fbbd32bd40bf71b4dfSun, 11 Sep 2022 00:00:00 GMT

IP Addresses

The command ifconfig on Linux displays an inet (IPv4) address in decimal notation and an inet6 (IPv6) address in hexadecimal notation. IP addresses are essential for communication (we communicate over layer 3).

2^32 = 4,294,967,296 → the number of possible IPv4 addresses

2^128 ≈ 3.402823669×10³⁸ → the number of possible IPv6 addresses

We still use IPv4 even after exhausting the possible addresses thanks to NAT (Network Address Translation), which lets hosts use private IP addresses internally and translates their traffic through a shared public IP address.
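The figures above, and the private ranges that NAT relies on, are easy to check with Python's built-in ipaddress module; the two sample addresses below are just illustrations.

import ipaddress

print(2 ** 32)   # 4294967296 possible IPv4 addresses
print(2 ** 128)  # roughly 3.4 x 10**38 possible IPv6 addresses

# RFC 1918 private addresses are the ones NAT translates behind a public address
print(ipaddress.ip_address("192.168.1.10").is_private)  # True
print(ipaddress.ip_address("8.8.8.8").is_private)       # False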

IPV4 classes

MAC Address

MAC — Media Access Control, a physical address that allows us to communicate when using switches. It operates at layer 2.

MAC addresses embed a vendor identifier: take the first 3 of the 6 octet pairs of the MAC address and paste them into https://aruljohn.com/mac.pl to see the vendor (company).
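That lookup works because the first three octets form the OUI (Organizationally Unique Identifier). A tiny Python sketch of pulling them out (the MAC address here is made up):

mac = "00:1A:2B:3C:4D:5E"           # illustrative MAC address
oui = ":".join(mac.split(":")[:3])  # the first 3 of the 6 octet pairs identify the vendor
print(oui)                          # 00:1A:2B is the prefix the lookup site matches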

TCP, UDP, and The three-way-handshake

This is layer 4.

TCP — Transmission Control Protocol, a connection-oriented protocol

UDP — User Datagram Protocol, a connectionless protocol

TCP establishes a connection using a three-way handshake:

SYN > SYN-ACK > ACK.
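The handshake itself is carried out by the operating system's TCP stack; simply opening a TCP connection from Python triggers the SYN, SYN-ACK, ACK exchange, which you can then observe in a packet capture. The host and port below are illustrative.

import socket

# connect() makes the OS send SYN; the server replies with SYN-ACK; our ACK completes the handshake
with socket.create_connection(("example.com", 80), timeout=5) as s:
    print("Handshake complete with", s.getpeername())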

Wireshark

A common way to capture and inspect network traffic is Wireshark.

Common Ports & Protocols

Examples include 21 (FTP), 22 (SSH), 25 (SMTP), 53 (DNS), 80 (HTTP), 110 (POP3), 143 (IMAP), and 443 (HTTPS).

The OSI Model

  1. Physical layer — data cables, Cat6
  2. Data link layer — switching, MAC addresses
  3. Network layer — IP addresses, routing
  4. Transport layer — TCP/UDP
  5. Session layer — session management
  6. Presentation layer — JPEG, MOV, WMV
  7. Application layer — HTTP, SMTP
]]>