Which of the following is the most suitable response strategy for malware outbreaks?

Security component fundamentals for assessment

Leighton Johnson, in Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020

Containment, eradication, and recovery

“Containment is important before an incident overwhelms resources or increases damage. Most incidents require containment, so that is an important consideration early in the course of handling each incident. Containment provides time for developing a tailored remediation strategy. An essential part of containment is decision-making (e.g., shut down a system, disconnect it from a network, or disable certain functions). Such decisions are much easier to make if there are predetermined strategies and procedures for containing the incident. Organizations should define acceptable risks in dealing with incidents and develop strategies accordingly.

Containment strategies vary based on the type of incident. For example, the strategy for containing an email-borne malware infection is quite different from that of a network-based DDoS attack. Organizations should create separate containment strategies for each major incident type, with criteria documented clearly to facilitate decision-making.”15

“After an incident has been contained, eradication may be necessary to eliminate components of the incident, such as deleting malware and disabling breached user accounts, as well as identifying and mitigating all vulnerabilities that were exploited. During eradication, it is important to identify all affected hosts within the organization so that they can be remediated. For some incidents, eradication is either not necessary or is performed during recovery.

In recovery, administrators restore systems to normal operation, confirm that the systems are functioning normally, and (if applicable) remediate vulnerabilities to prevent similar incidents. Recovery may involve such actions as restoring systems from clean backups, rebuilding systems from scratch, replacing compromised files with clean versions, installing patches, changing passwords, and tightening network perimeter security (e.g., firewall rulesets, boundary router access control lists). Higher levels of system logging or network monitoring are often part of the recovery process. Once a resource is successfully attacked, it is often attacked again, or other resources within the organization are attacked in a similar manner.

Eradication and recovery should be done in a phased approach so that remediation steps are prioritized. For large-scale incidents, recovery may take months; the intent of the early phases should be to increase the overall security with relatively quick (days to weeks) high value changes to prevent future incidents. The later phases should focus on longer-term changes (e.g., infrastructure changes) and ongoing work to keep the enterprise as secure as possible.”16
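The guidance above calls for predetermined, per-incident-type containment strategies with clearly documented criteria. As a hypothetical sketch (the incident types and actions below are invented for illustration, not taken from the handbook), such strategies can be modeled as a simple lookup:

```python
# Hypothetical sketch: predetermined containment strategies keyed by
# major incident type, so responders are not improvising mid-incident.
# Incident types and actions are illustrative, not from the handbook.

CONTAINMENT_PLAYBOOKS = {
    "email_malware": [
        "quarantine affected mailboxes",
        "block sender domains at the mail gateway",
        "disconnect infected hosts from the network",
    ],
    "network_ddos": [
        "enable upstream rate limiting",
        "apply ACLs at border routers",
        "engage the ISP's scrubbing service",
    ],
}

def containment_actions(incident_type: str) -> list[str]:
    """Look up the predefined containment steps for an incident type."""
    # Fall back to escalation when no strategy has been predefined.
    return CONTAINMENT_PLAYBOOKS.get(incident_type, ["escalate to CSIRT lead"])
```

Keeping the playbook as data rather than prose makes it easy to review, version, and test the decision criteria in advance, which is exactly when the text says such decisions are easiest to make.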


URL: https://www.sciencedirect.com/science/article/pii/B9780128184271000112

Map Investigative Workflows

Jason Sachowski, in Implementing Digital Forensic Readiness, 2016

Eradication and Recovery

After a containment strategy has been implemented, work can begin to remove the elements of the incident from wherever they exist throughout the organization. At this time, it is important that all affected assets and resources have been identified and remediated to ensure that when containment measures are removed, the incident does not come back or propagate further through the organization.

Recovery efforts that follow eradication involve restoring assets and resources to their normal and fully functional state, such as changing passwords, restoring data from backups, or installing patches. Recovery should be completed following the eradication of an incident’s impact from a particular asset or resource, not in parallel. By completing these tasks in a phased approach, organizations can prioritize removing the threats from their environment as quickly as possible, and then focus on the work needed to keep the organization secure for the long term.


URL: https://www.sciencedirect.com/science/article/pii/B9780128044544000113

Developing an Incident Response Plan

Laura P. Taylor, in FISMA Compliance Handbook, 2013

Containment and eradication

Upon discovery of a security incident, it is very important to prevent it from spreading to other systems and networks. You should always try to contain an incident before you try to eradicate it. Somewhere in your Incident Response Plan you should state that you have a containment strategy and describe what it is based on.

Each type of security incident may require a completely different containment strategy. Various measures that can be used to contain security incidents and eradicate the intruder include:

Blocking all incoming network traffic on border routers

Blocking networks and incoming traffic on firewalls

Blocking particular services (e.g., ftp, telnet) and ports on firewalls

Disconnecting infected systems from the network

Shutting down the infected system

Locking compromised accounts

Changing passwords on compromised systems

Isolating specific network segments
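As a rough illustration of automating two of the measures above (the `iptables` and `usermod` invocations are standard Linux commands, but exact policies would be site-specific; this sketch only builds the command lists rather than executing them, so the steps can be reviewed before an operator runs them):

```python
# Illustrative sketch, not from the handbook: generate the shell
# commands for two common containment measures.  Returning command
# lists (instead of executing them) lets the CSIRT review each step
# before it is applied.

def block_host_commands(ip: str) -> list[list[str]]:
    """Commands to drop all traffic to and from a suspect host."""
    return [
        ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
        ["iptables", "-A", "OUTPUT", "-d", ip, "-j", "DROP"],
    ]

def lock_account_command(user: str) -> list[str]:
    """Command to lock a compromised local account."""
    return ["usermod", "--lock", user]
```

In practice these would be fed to something like `subprocess.run` only after the decision points listed below (evidence preservation, memory forensics) have been considered.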

Before you power down an infected system, you should consider that once a system is powered off, its live memory is no longer available for forensic investigation. Various forensic tools can analyze live memory and recover data that could be useful to an investigation. In the course of an investigation, a conscious decision should be made on whether or not to proceed with memory forensics.

Before you attempt to remove and overwrite files associated with an incident, decisions need to be made on whether the evidence should be saved for forensic investigation. In fact, if the goal is to catch the perpetrator and turn the evidence over to a prosecuting attorney, system administrators should not even open files that they suspect have been tampered with. Once you open a file, you change its access time, which means that you have corrupted the evidence. The incident response team should report the facts to the agency legal counsel and take their advice into consideration before proceeding with forensics. Usually, it is not worth the time and expense it takes to find out who the perpetrator is. However, before conceding that the intruder will not be caught, the question should at least be asked whether evidence preservation is required.

During the containment and eradication process, the CSIRT will need to make various decisions on how to proceed. You may want to stipulate in your Incident Response Plan that the CSIRT will make the following determinations while processing the incident:

Determine which systems, applications, and networks have been affected

Determine whether to inform users that a security incident has occurred

Determine to what extent the affected systems should remain operational

Determine the scope, impact, and damage to the compromised systems

Determine if the prescribed course of action will destroy the evidence

Determine if other ISSOs should be informed of the incident


URL: https://www.sciencedirect.com/science/article/pii/B9780124058712000117

Network Security

George Varghese, in Network Algorithmics, 2005

17.5 DETECTING WORMS

It would be remiss to end this chapter without paying some attention to the problem of detecting worms. A worm (such as Code Red, Nimda, Slammer) begins with an exploit sent by an attacker to take over a machine. The exploit is typically a buffer overflow attack, which is caused by sending a packet (or packets) containing a field that has more data than can be handled by the buffer allocated by the receiver for the field. If the receiver implementation is careless, the extra data beyond the allocated buffer size can overwrite key machine parameters, such as the return address on the stack.

Thus with some effort, a buffer overflow can allow the attacking machine to run code on the attacked machine. The new code then picks several random IP addresses and sends similar packets to these new victims. Even if only a small fraction of IP addresses respond to these attacks, the worm spreads rapidly.
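A toy simulation of the random-probing spread just described (invented parameters, not a model from the text) shows how infection can grow quickly even when only a small fraction of probes find a vulnerable host:

```python
import random

# Toy illustration: each infected host probes some random addresses per
# round; only a small fraction of the address space is vulnerable.
# Parameters are invented for the example.

def simulate_worm(address_space=100_000, vulnerable=1_000,
                  probes_per_round=200, rounds=8, seed=1):
    rng = random.Random(seed)
    vulnerable_hosts = set(rng.sample(range(address_space), vulnerable))
    infected = {next(iter(vulnerable_hosts))}  # patient zero
    history = [len(infected)]
    for _ in range(rounds):
        newly = set()
        for _host in infected:
            for _probe in range(probes_per_round):
                target = rng.randrange(address_space)
                if target in vulnerable_hosts:
                    newly.add(target)  # a probe hit a vulnerable host
        infected |= newly
        history.append(len(infected))
    return history
```

With only a 1% hit rate per probe, each infected host still expects to compromise about two new victims per round, so the infected count climbs round over round, which is the exponential behavior the chapter refers to.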

Current worm detection technology is both retroactive (i.e., only after a new worm is first detected and analyzed by a human, a process that can take days, can the containment process be initiated) and manual (i.e., requires human intervention to identify the signature of a new worm). Such technology is exemplified by Code Red and Slammer, which took days of human effort to identify, following which containment strategies were applied in the form of turning off ports, applying patches, and doing signature-based filtering in routers and intrusion detection systems.

There are difficulties with these current technologies.

1.

Slow Response: There is a proverb that talks about locking the stable door after the horse has escaped. Current technologies fit this paradigm because by the time the worm containment strategies are initiated, the worm has already infected much of the network.

2.

Constant Effort: Every new worm requires a great deal of human work to identify, post advisories about, and finally take action to contain. Unfortunately, all evidence seems to indicate that there is no shortage of new exploits. Worse, simple binary rewriting and other modifications of existing attacks can get around simple signature-based blocking (as in Snort).

Thus there is a pressing need for a new worm detection and containment strategy that is real time (and hence can contain the worm before it infects a significant fraction of the network) and that can deal with new worms with a minimum of human intervention (some human intervention is probably unavoidable, if only to catalog detected worms, do forensics, and fine-tune automatic mechanisms). In particular, the detection system should be content agnostic: it should not rely on external, manually supplied worm signatures, but should automatically extract signatures, even for new worms that may arise in the future.

Can network algorithmics speak to this problem? We believe it can. First, we observe that the only way to detect new worms and old worms with the same mechanism is to abstract the basic properties of worms.

As a first approximation, define a worm to have the following abstract features, which are indeed discernible in all the worms we know, even ones with such varying features as Code Red (massive payload, uses TCP, and attacks on the well-known HTTP port) and MS SQL Slammer (minimal payload, uses UDP, and attacks on the lesser-known MS SQL port).

1.

Large Volume of Identical Traffic: These worms have the property that at least at an intermediate stage (after an initial priming period but before full infection), the volume of traffic (aggregated across all sources and destinations) carrying the worm is a significant fraction of the network bandwidth.

2.

Rising Infection Levels: The number of infected sources participating in the attack steadily increases.

3.

Random Probing: An infected source spreads infection by attempting to communicate to random IP addresses at a fixed port to probe for vulnerable services.

Note that detecting all three of these features may be crucial to avoid false positives. For example, a popular mailing list or a flash crowd could have the first feature but not the third.

An algorithmics approach to worm detection naturally leads to the following detection strategy, which automatically detects each of these abstract features with low memory and small amounts of processing, works with asymmetric flows, and does not use active probing. The high-level mechanisms are:

1.

Identify Large Flows in Real Time with Small Amounts of Memory: In Section 16.4 we described mechanisms to identify flows with large traffic volumes for any definition of a flow (e.g., sources, destinations). A simple twist on this definition is to realize that the content of a packet (or, more efficiently, a hash of the content) can be a valid flow identifier, which by prior work can identify in real time (and with low memory) a high volume of repeated content. An even more specific idea (which distinguishes worms from valid traffic such as peer-to-peer) is to compute a hash based on the content as well as the destination port (which remains invariant for a worm).

2.

Count the Number of Sources: In Section 16.5 we described mechanisms using simple bitmaps of small size to estimate the number of sources on a link using small amounts of memory and processing. These mechanisms can easily be used to count sources corresponding to high traffic volumes identified by the previous mechanism.

3.

Determine Random Probing by Counting the Number of Connection Attempts to Unused Portions of the IP Address Space: One could keep a simple compact representation of the portions of the IP address space known to be unused. One example is the so-called Bogon list, which lists unused 8-bit prefixes (can be stored as a bitmap of size 256). A second example is a secret space of IP addresses known to an ISP to be unused (can be stored as a single prefix). A third is a set of unused 32-bit addresses (can be stored as a Bloom filter).
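A minimal sketch of how the first two mechanisms might fit together: key each packet by a hash of (content, destination port), then track repetition volume and the number of distinct sources per key. This is an illustration of the idea only, not the EarlyBird implementation, which uses the compact sketches and bitmaps of Chapter 16 rather than Python dictionaries and sets; the thresholds here are invented.

```python
import hashlib
from collections import defaultdict

# Illustrative sketch of content-keyed worm detection: a flow is
# identified by hash(payload + destination port), and a flow is
# suspicious when it is both high volume and spread across many
# distinct sources.  Thresholds are invented for the example.

class WormDetector:
    def __init__(self, volume_threshold=100, source_threshold=10):
        self.volume = defaultdict(int)       # packets seen per content key
        self.sources = defaultdict(set)      # distinct source IPs per key
        self.volume_threshold = volume_threshold
        self.source_threshold = source_threshold

    def observe(self, payload: bytes, dst_port: int, src_ip: str):
        # Destination port is folded into the key: it is invariant for
        # a worm but differs for most legitimate repeated content.
        key = hashlib.sha1(payload + dst_port.to_bytes(2, "big")).digest()[:8]
        self.volume[key] += 1
        self.sources[key].add(src_ip)

    def suspicious(self):
        return [k for k in self.volume
                if self.volume[k] >= self.volume_threshold
                and len(self.sources[k]) >= self.source_threshold]
```

Requiring both conditions is what guards against the false positives mentioned above: a flash crowd fetching one popular page produces high volume from many sources toward one destination, but the content key here includes the repeated payload that clients send, so filtering still needs the random-probing check (mechanism 3) before raising an alarm.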

Of course, worm authors could defeat this detection scheme by violating any of these assumptions. For example, a worm author could defeat Assumption 1 by using a very slow infection rate and by mutating content frequently. Assumption 3 could be defeated using addresses known to be used. For each such attack there are possible countermeasures. More importantly, the scheme described seems certain to detect at least all existing worms we know of, though they differ greatly in their semantics. In initial experiments at UCSD as part of what we call the EarlyBird system, we also found very few false positives where the detection mechanisms complained about innocuous traffic.


URL: https://www.sciencedirect.com/science/article/pii/B9780120884773500217

The COVID-19 outbreak: social media sentiment analysis of public reactions with a multidimensional perspective

Basant Agarwal, ... Ashish Sharma, in Cyber-Physical Systems, 2022

7.1 Introduction

The coronavirus disease (2019-nCoV) pandemic is an unprecedented crisis that has affected almost every individual in terms of poverty, sustainability, and development all over the globe. Therefore, it becomes even more important to analyze its effect on the development of people’s lives (Cruz & Ahmed, 2018). After the SARS-COV-2 epidemic spread out of China, the evolution of the coronavirus disease (2019-nCoV) pandemic has shown dramatic differences among countries across the globe (Bhatnagar et al., 2020; Singh et al., 2020; Sohrabi et al., 2020). The panic of the COVID-19 outbreak has traversed the globe and significantly impacted the global economy and the lifestyle of people all over the world (Qiu, Chen, & Shi, 2020). Most countries adopted aggressive containment strategies of “lockdown” to mitigate the spread of the highly infectious COVID-19 disease (Chawla, Mittal, Chawla, & Goyal, 2020). The increase in the number of cases during the lockdown period has also spread disruption, worry, stress, fear, disgust, sadness, and, most importantly, loneliness among the public at large (Zaroncostas, 2020).

On March 25, India, the world’s second-most populous country with 1.3 billion citizens, witnessed the most exhaustive confinement experiment in its history, in an endeavor to combat the COVID-19 disease (Roy et al., 2020; Sharma & Agarwal, 2020). India faced several challenges in dealing with the increasing number of infected cases, mainly in maintaining social distancing in its densely populated residential areas (Covid Social distancing, 2020). With firm preventive control measures and curtailments put in place by the Government of India in the form of a nationwide lockdown, citizens experienced a wide range of psychological and emotional responses such as fear and anxiety (Bao, Sun, Meng, Shi, Lu). Accordingly, the Indian government imposed a complete lockdown from March 25, 2020, till April 14, 2020 (lockdown 1.0), which was extended till May 3, 2020 (lockdown 2.0), and further extended as lockdown 3.0 till May 17, 2020 (Lockdown in India, 2020). During these lockdowns, the sentiments and psychological state of the public changed significantly, primarily due to the various government policies imposed from time to time (Adam, 2020).

In an era of round-the-clock availability of social media platforms, and with social distancing in force, sentiment analysis plays a major role in understanding public opinion. The Obama administration used sentiment analysis to measure public opinion before the 2012 presidential election. Various studies have shown that social media is an important platform for information extraction. Sentiment analysis, also known as opinion mining, refers to the techniques and processes that help organizations or policymakers learn how people are reacting to a particular policy or situation.

With the worldwide spread of the COVID-19 infection and the resulting lockdown and “work from home,” individual activity on social media platforms such as Facebook, Twitter, and YouTube began to increase. Sentiment analysis provides insights into various important issues from the perspective of common people. Through sentiment analysis, decisions are based on a significant amount of data rather than simple intuition. It helps to understand people’s opinion of a situation or policy. We use social media cues to understand the relationship between people’s sentiments and the effectiveness of the countermeasures deployed by the government during different phases of the complete lockdown in India. In a pandemic situation in a dynamic and developing country like India, policymakers cannot respond to every issue. Sentiment analysis of social media can help them prioritize the most important challenges (Agarwal & Mittal, 2016). This study aims to help quickly identify policy priorities by examining and analyzing the sentiments of the people via a popular social media platform (Harjule, Gurjar, Seth, & Thakur, 2020; Zaroncostas, 2020).

Social media has been a major mode of communication among people that creates a large amount of user-generated opinionated textual data (Garrett, 2020). This huge collection of data is a rich source for understanding the variations in public sentiment toward different policies in India (Bhat et al., 2020). We use one of the most popular social media platforms—Twitter—to gauge the feelings of Indians toward the lockdown and other government policies.

A large amount of opinionated text on social media can provide important insights into public sentiment. In this chapter, we present a temporal sentiment analysis of Twitter data to understand the effect of lockdown on public perception. The impact of government control policies through the various stages of lockdown in India is analyzed, which reveals interesting aspects of public emotions. In addition, the analysis done in this study may offer a forward-thinking perspective on the pandemic from the focal point of social media. This study provides answers to certain questions, such as: How is Twitter being utilized to circulate fundamental information and updates? What is the subject of conversation among individuals in the hour of the corona emergency? What are the results of lockdown, and how are individuals responding to it? It was observed that positive sentiments dominated initially but slowly shifted toward negative sentiments by the end of the third lockdown.
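As a minimal illustration of the kind of lexicon-based polarity scoring such temporal analyses build on (the lexicon, tweets, and phase labels below are invented for the example and are not the chapter's actual pipeline):

```python
# Illustrative sketch: score tweet polarity with a tiny hand-made
# lexicon, then aggregate mean polarity per lockdown phase.
# Lexicon and phase labels are invented, not from the chapter.

POSITIVE = {"hope", "safe", "recover", "support", "together"}
NEGATIVE = {"fear", "stress", "lonely", "worry", "crisis"}

def polarity(tweet: str) -> int:
    """Positive-word count minus negative-word count."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_polarity_by_phase(tweets_by_phase: dict[str, list[str]]) -> dict[str, float]:
    """Average polarity of the tweets collected in each lockdown phase."""
    return {phase: sum(map(polarity, tweets)) / len(tweets)
            for phase, tweets in tweets_by_phase.items()}
```

Tracking the per-phase mean over time is what lets a study like this one observe the drift from positive toward negative sentiment across successive lockdowns; production pipelines would use a full sentiment lexicon or a trained classifier rather than a toy word list.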

This book chapter is organized as follows: Section 2 describes the data collection. In Section 3, the impact of COVID-19 across the world is studied by finding the countries most affected by the corona crisis. A choropleth map is given to depict this. The top four countries are compared in terms of the number of cases by plotting per capita graphs. Along with this, sentiment analysis is done for the top five countries by extracting tweets from those countries.

Further, in Section 4, data analysis of COVID-19 in India is carried out by understanding its trend through sentiment analysis of the tweets collected from India. This is visualized with a word cloud, scatter plot, and bar graphs. For an in-depth analysis of COVID-19, the analysis for a particular city is done. The relationship between the number of tweets and days is shown using graphs. The frequency of the most used words is also shown graphically and with a word cloud. The sentiment analysis for the tweets from particular cities is done with the help of different types of plots such as box plots, scatter plots, polarity, and density curves. In Section 5, some of the most trending hashtags, such as #WorkFromHome and #MigrantLabour, are analyzed through sentiment analysis, word clouds, and word-frequency graphs. Finally, the conclusion is given in Section 6.


URL: https://www.sciencedirect.com/science/article/pii/B9780128245576000133

A review of mathematical model-based scenario analysis and interventions for COVID-19

Regina Padmanabhan, ... Mohammed Abdulla Al-Hitmi, in Computer Methods and Programs in Biomedicine, 2021

6 Discussion

This pandemic has taught us how much more important it is to prevent an epidemic than to take it lightly at the beginning and face the consequences thereafter. There exist many successful containment stories of earlier outbreaks that did not get global media attention simply because the outbreak did not lead to a pandemic [11,82,111,117,121]. As human behavior is the key factor that decides whether an outbreak will be contained or lead to a pandemic, it is important to create awareness about effective containment strategies. As the frequency of pandemics is increasing, observing a world epidemic awareness day or week to promote the importance of awareness and preparedness among the public may help the generations to come. Awareness generally allows a community to restore its freedom by choosing solutions to a problem rather than focusing on the restrictions that limit it. Hence, the successful containment of earlier epidemics should be analyzed further and showcased as an example to instill the importance of preparedness and early strategic containment efforts [11,82,111,117,121].

Epidemic modeling involves uniformity assumptions leading to aggregate modeling with uniform compartments. The interaction between the infected compartment and the susceptible compartments can be homogeneous or heterogeneous. As shown in the previous sections, adding compartments to account for social and behavioral aspects addresses the heterogeneous interaction issue to a certain extent [31,129]. Network-based models are also desirable, as they allow the analysis of scenarios such as what happens when a node or a link is removed, or which node or link to remove for optimal and cost-effective containment [101].

Existing model-based analyses unanimously suggest that mathematical models are critical tools in facilitating epidemic control and hence need to be adapted and improved with model parameters that specifically account for social contact, human mobility, economic impact, molecular/genetic aspects of the disease, etc. [38,84]. For instance, the development of a cross-scale model that includes population dynamics, pathogen dynamics in the host, viral shedding, social behavior, and environmental spread is desirable. Studies also highlight that disease transmission rates depend strongly on population behavior, not on population size. Incorporating the time lag involved in testing and reporting, as well as information on pathogen load, which can be quantified by qRT-PCR (quantitative RT-PCR) to discriminate between super-spreaders and others, is also desirable [31,129]. There can be two kinds of spreaders: those defined by public interaction/individual behavior and those defined by viral load.
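For concreteness, a minimal SIR compartment model of the kind this review discusses, integrated with a simple Euler step (the parameter values are illustrative only, not fitted to COVID-19 data):

```python
# Minimal SIR compartment model with Euler integration.
# beta: transmission rate, gamma: recovery rate; fractions of a
# normalized population.  Parameters are illustrative only.

def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    s, i, r = s0, i0, 0.0
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        ds = -beta * s * i          # susceptibles becoming infected
        di = beta * s * i - gamma * i  # net change in infecteds
        dr = gamma * i              # infecteds recovering
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        trajectory.append((s, i, r))
    return trajectory
```

The uniformity assumption mentioned above is visible directly in the `beta * s * i` term: it treats every susceptible–infected pair as equally likely to meet, which is exactly what the additional compartments and network-based models discussed here try to relax.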

Extensive digital technologies have been utilized in the fight against COVID-19 for case identification, contact tracing, and various intervention-response evaluations [21]. For instance, in Qatar, along with infrared temperature scanners, a mobile app is used to screen every incoming visitor to supermarkets, banks, and other public and private organizations, thus limiting entry to such places to non-exposed individuals. The existence of completely digitized population and health data before the pandemic enabled Qatar to quickly integrate COVID-19 testing and reporting via the public health system and link the test results to each individual’s mobile phone. This facilitates the practical implementation and successful deployment of an appropriate public health response against disease spread. Apart from the use of control strategies for deriving active intervention protocols, artificial intelligence and digital methods are used worldwide in many applications such as symptom detectors, X-ray image analysis, AI-based intelligent robot assistance for sanitizing, lifting, or transporting infected people, lockdown patrol, human activity or interaction detection, hospital triage, and blood-sample collection, to name some [40,46,86,105,107]. However, AI-based techniques for deriving effective control measures to mitigate the spread are scarce [96]. In [96], a Q-learning-based model-free closed-loop controller that accounts for cost and hospital saturation constraints related to COVID-19 mitigation is discussed.

Challenges pertaining to mathematical model-based research include translating and implementing mathematically motivated decisions in practice to curtail the spread of COVID-19 or any future pandemic. It is important to conduct post-pandemic validation of mathematical models to re-validate their reliability and to increase the confidence of the public, policymakers, and governments in mathematical models. Compared to other areas that rely on model-based study, such as robotics, aeronautics, drug administration, etc., the main challenge in modeling an emerging pandemic is the limited data and knowledge about the transmission parameters and the time scales of intervention-response curves. Conducting post-pandemic model re-validation and analysis is essential to set protocols for tackling future pandemics. Some of the questions that are yet to be answered are: Why is disease severity so widely different across ages and countries? What is the duration of protection expected from vaccines? Can a vaccine developed for the SARS-COV virus ward off other virus variants effectively? Is there any difference in vaccine efficacy across age groups? [38,102,123].


URL: https://www.sciencedirect.com/science/article/pii/S0169260721003758