The Stages of Attack or Penetration Testing

  1. Network reconnaissance

This is the first phase involved in penetrating a system. This is the stage during which information is gathered about the target in order to facilitate the attack. It can feature “Active” methods, such as in-person social engineering, or “Passive” methods, such as searching public records or querying services like “Shodan”.

  2. Host port scanning and banner grabbing

This phase involves using port scanners to identify open and closed ports on target hosts. It is carried out with tools such as “Nmap”, “SuperScan”, and “Angry IP Scanner”. Packet sniffers like Ettercap and Wireshark can help capture information traversing a site or network.
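The scanning and banner-grabbing steps above can be sketched with nothing more than Python’s standard socket module. This is a minimal illustration for hosts you are authorized to test; real tools like Nmap are far more capable and far stealthier.

```python
import socket

def scan_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Connect to an open port and read whatever the service announces."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))
        return s.recv(1024).decode(errors="replace").strip()

# Example usage (the address is a placeholder for a host you own):
# for port in (21, 22, 80, 443):
#     if scan_port("192.0.2.10", port):
#         print(port, "open")
```

Services like FTP and SSH typically volunteer a version string on connect, which is what the banner-grabbing step collects.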

  3. Vulnerability identification and exploitation

Using tools like Metasploit or sqlmap, this phase looks for vulnerabilities that can be exploited to gain access to a system or network. Control can be gained at the operating system, application, or network level. This can proceed into privilege escalation via password cracking, or into DoS or DDoS attacks. Vulnerability scanners such as Nessus and Nipper help determine how vulnerable a system is.
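At its simplest, vulnerability identification is a matching problem: compare what a service reveals about itself (for example, a grabbed banner) against a list of versions known to be flawed. The sketch below illustrates the idea only; the version entries are hypothetical examples, not a real vulnerability database, and real scanners like Nessus go far beyond banner matching.

```python
# Hypothetical known-vulnerable version strings mapped to short notes.
KNOWN_VULNERABLE = {
    "vsFTPd 2.3.4": "backdoored release (example entry)",
    "OpenSSH 7.2p1": "outdated build (example entry)",
}

def check_banner(banner: str) -> list[str]:
    """Return notes for any known-vulnerable version found in a banner."""
    return [note for version, note in KNOWN_VULNERABLE.items()
            if version in banner]
```

A scanner would feed every grabbed banner through a check like this and flag the hits for an analyst to verify manually, since banners can be spoofed or patched out of sync with the version string.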

  4. Rootkit installation

If possible, installing a rootkit is an excellent way to maintain control over a system or network while avoiding detection. Rootkits disguise themselves and are difficult to detect. Installation generally occurs after an attacker has successfully exploited a vulnerability in a system or network. The term “root” refers to the privileged administrator account on Unix-like systems. Rootkits are able to make modifications at the kernel level, and removal of firmware rootkits is often difficult to impossible.

  5. Hiding tracks

One of the final phases, often called “covering tracks,” aims to leave as little evidence as possible that the attacker penetrated a system. The more skilled the attacker, the less evidence they leave behind. This phase is critical to avoiding capture and to ensuring that any modifications or installed malware stay in place as long as possible. Hiding tracks well closes out the attack and ultimately determines its overall success.


Source: Author

————————————————————————————–

Contact us at admin@gostst.com

or call 24/7: (210) 446-4863

The Latest Trends in Malware

         As we move further into the 21st century and witness major advancements in computational power and the sprawl of web-connected devices, malware writers manage to keep up with trends and write malevolent software to match each step forward. Just like the legitimate players in the tech industry, these shadowy figures innovate, finding new vectors for infection and better methods to obscure their wares from the average user and professional alike. It is safe to assume that cybercriminals are doing all they can to become more effective and virulent, and as a result the demand for the security industry’s remedies grows as well. Here is a look at some upcoming trends in malware.
         While some malware aims to impose a ransom or to steal data, other strains take a more aggressive approach. These “wiper” malware families, such as “Shamoon”, “BlackEnergy”, and “Destover”, have the single purpose of destroying systems and the data they contain. This tends to cause a great deal of financial damage to victims, and in many cases ruins their reputations as well. Whether the goal is sabotage or covering the threat actor’s tracks on the way out of a penetrated system, this is an area to watch.
         “Fileless malware” is able to infect local hosts without leaving behind any artifacts on the hard drive. This makes it difficult for traditional antimalware software to detect, since such software tends to rely on virus and malware signatures to determine infection. These attacks almost doubled in 2018.
         Botnets are distributed infections that use many hosts’ computational power to infect other machines and perform the attacker’s desired actions, such as cryptomining or DDoSing targets. So-called “bot-herders” who control these bots have even managed to create “self-organizing” botnet swarms. Because it promises automated, widespread infection, this is a very enticing method of spreading malware for threat actors everywhere.
         APTs (Advanced Persistent Threats) are typically thought to originate from nation-state actors with a wealth of resources. Because nation-states are sovereign, enforcement against them is difficult to impossible. They are able to create customized malware of the highest order to carry out espionage and attacks, and often aim to spy on vast numbers of users and even entire enemy or rival nations. These threats, such as the “Sofacy” malware, will only continue to grow, and some have now been observed evolving their own code.
         Cryptomining, which we have looked at previously, saw an 83 percent increase in attacks over the last year according to Kaspersky Lab, with over 5 million infections in the first three quarters of 2018. Examples include “MassMiner” and “Kitty”.
         Threat actors have picked up development of card-skimming malware in 2019, according to RiskIQ. This malware steals personal information at POS machines and often involves physically planting devices onto terminals such as ATMs to “skim” credit card details.
         Steganography involves hiding information by, for example, encoding executable content inside images, text documents, and other less traditional formats. Encoding malware steganographically helps it evade recognition by antimalware software. Threat actors will continue to push the limits to hide their toxic software from users and antimalware alike.
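The core trick is easy to demonstrate. A minimal sketch of least-significant-bit (LSB) steganography, assuming the cover medium is simply a bytearray of raw values (for an image, these would be pixel bytes): each hidden bit replaces the lowest bit of one cover byte, changing it by at most 1, which is imperceptible in most media.

```python
def embed(cover: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # clear LSB, then set it
    return stego

def extract(stego: bytearray, length: int) -> bytes:
    """Recover `length` bytes hidden by embed()."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (stego[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)
```

Because the carrier still looks like a normal file and the payload never appears as a contiguous byte sequence, signature-based antimalware has little to match against; detection usually requires statistical analysis of the carrier instead.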
         Source: Threatpost.com
————————————————————————————–
Security Technology of South Texas

Virtual Machines and The Cloud

Cloud-based services have grown to monopolize some segments of the tech fields. In many cases, it is simply more economically feasible to go to companies like Amazon and make use of their distributed computational infrastructure than to purchase and run servers on location. Here we will look at some of the options available, and what parts of an enterprise can be virtualized.

Virtualization involves the use of what are called VMs, or Virtual Machines. A program such as VMware or VirtualBox allows the real-time simulation of various operating systems, from Linux, macOS, and Windows to less well-known OSs such as those used on routers and Cisco devices. In many cases, companies choose virtual machines over physical hardware to handle high-traffic scenarios more cheaply and efficiently.

Virtual machines can either be run “bare-metal”, meaning directly on the hardware without any underlying operating system, or inside hypervisor programs like those mentioned above, which can manage multiple virtual machines on one physical machine so long as sufficient computational power and memory exist. These are known respectively as Type 1 and Type 2 hypervisors.

Type 1 (bare-metal) hypervisors have no underlying OS or device drivers to contend with for resources and are generally regarded as the most efficient form of hypervisor, with the best performance. Some examples are VMware ESXi, Microsoft Hyper-V Server, and the open-source KVM. These hypervisors are also highly secure: the kinds of vulnerabilities intrinsic to Type 2 hypervisors are absent from bare-metal solutions, because there is no underlying OS to serve as an attack surface. This provides logical isolation for Type 1 hypervisors against attack.

Type 2 hypervisors carry unavoidable latency, because all their work must pass through the host’s OS. Any security flaws in that OS (of which Windows in particular has had many) could potentially compromise all VMs running above it. Because of this, Type 2 hypervisors are typically not used for data centers; they appear more often on end-user systems and in situations where performance and security are not as great a concern. Developers often use these hypervisors to test products before release.

Both types use something called “hardware acceleration” to different degrees, though Type 2 hypervisors can fall back on software emulation if hardware virtualization is not supported on the computer. Hardware acceleration includes the Intel Virtualization Technology (VT-x) and AMD (AMD-V) extensions for those CPU families.
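On Linux, you can check whether these extensions are present by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A small sketch; this is Linux-specific and simply reports absence on systems without that file:

```python
import pathlib

def cpu_supports_virtualization(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Check for Intel VT-x ('vmx') or AMD-V ('svm') CPU flags on Linux."""
    path = pathlib.Path(cpuinfo_path)
    if not path.exists():
        return False  # non-Linux systems need a different mechanism
    flags = path.read_text()
    return " vmx" in flags or " svm" in flags
```

If neither flag appears, a Type 2 hypervisor may still run guests through slower software emulation, but bare-metal hypervisors generally require the hardware support.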

The appeal of virtual machines is obvious. Whereas in the past it was necessary to have a physical infrastructure of servers to support even relatively basic enterprises, companies can now subscribe to a service dedicated to hosting this storage and processing power off-site. Virtual machines move this infrastructure into a logical space and reduce both the attack surface and the costs associated with a sprawling network of on-site machines. Firewalls are often virtualized today, as are the resources responsible for single sign-on and user authentication.


Source: vapour-apps.com

————————————————————————————–

PKI (Public Key Infrastructure) and LDAPv2

Public Key Infrastructure (PKI) is the set of standards and methods required to manage the generation of digital certificates and to establish cryptographic communication between parties. PKI allows information to move securely through networks and is used in a vast array of network-based activities. Essentially, a public key is provided by some entity or party (held on a server) and is mathematically paired with a private key, which is specific to an individual and kept secret.

Fundamentally, PKI is about public-key cryptography. One example of this kind of key pair in use is a cryptomalware ransomware attack. In this case, the threat actor who has encrypted the victim’s files holds the private key. The public key is made available to perform the encryption, and upon payment of the ransom, the attacker, hopefully, releases the private key to the victim for decryption.
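The public/private relationship can be illustrated with a toy RSA key pair. The numbers below are deliberately tiny for readability; real PKI uses 2048-bit or larger keys generated by vetted libraries, never hand-rolled arithmetic like this.

```python
# Toy RSA: n = p*q is public; e is the public exponent, d the private one.
p, q = 61, 53
n = p * q        # 3233, shared by both keys
e = 17           # public key: (n, e)
d = 2753         # private key: (n, d); chosen so (e*d) % lcm(p-1, q-1) == 1

def encrypt(message: bytes) -> list[int]:
    """Anyone holding the public key can encrypt, one byte at a time."""
    return [pow(b, e, n) for b in message]

def decrypt(ciphertext: list[int]) -> bytes:
    """Only the private-key holder can reverse the operation."""
    return bytes(pow(c, d, n) for c in ciphertext)
```

In the ransomware scenario above, the victim’s machine effectively runs only encrypt(); without d, recovering the plaintext requires factoring n, which is infeasible at real key sizes.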

PKI is also used in industries such as banking and generally in situations where a password would be insufficient to confirm the identity of the parties involved.

——————————————————————————————————

RFC and Internet Draft

 Internet Drafts (I-Ds) are essentially technical documents published by the IETF. They contain research related to networking and are sometimes intended to become an RFC (Request For Comments). An RFC, developed by computer scientists and network experts, is then submitted for peer review. Some RFCs are adopted by the IETF as standards, though others are purely informational or experimental in nature.

LDAPv2

Here we will take a look at one particular standard, LDAPv2. LDAP is an acronym for Lightweight Directory Access Protocol. LDAPv2 was developed as a vendor-neutral protocol for accessing X.500 directories, but since it dates from 1995, a number of vulnerabilities have emerged over time. According to the document, LDAPv2 does not support “modern authentication mechanisms” such as Kerberos V.

One of its core features is its ability to maintain central storage of usernames and passwords; however, in the document provided at datatracker.ietf.org/doc/rfc3494, LDAPv2 is recommended for retirement, due in part to the fact that it fails to “provide any mechanism for data integrity or confidentiality”. The text goes on to discuss LDAPv3 and its support for stronger authentication and confidentiality, which more adequately fulfills the “CIA” model of information security. The author recommends moving LDAPv2 to “Historic” status, meaning that developers should no longer use it and should consider it obsolete and vulnerable to exploitation.

——————————————————————————————————

The Pros and Cons of PKI

There is an argument to be made that in some cases PKI is simply unnecessary, especially when it is easier to implement two-factor authentication such as OTP (One-Time Password) tokens or smart cards. Maintaining a PKI can be complicated, time-consuming, and expensive, so some organizations choose to outsource the job. However, the primary advantage of public-key authentication, for example through SSH (Secure Shell), is its high degree of security. So long as the private key is kept secret, a threat actor cannot simply run a dictionary or brute-force attack to crack a user’s password.

——————————————————————————————————

To sum up, the advantages of PKI lie in the fact that it is vastly more secure than a simple password system: a threat actor must obtain not only the cleartext or hashed password but also the private key in order to impersonate a user.

The disadvantages are primarily related to scalability and management overhead, especially in larger environments. Furthermore, in some situations the use of PKI could simply be considered overkill, and two-factor authentication, OTP, or CAC cards may be the superior option.

Source: datatracker.ietf.org/doc/rfc3494

——————————————————————————————————

Contact Security Technology of South Texas, Inc. today at
(210) 446-4863