
Introduction

Shancang Li, in Securing the Internet of Things, 2017

1.2.2 Network Layer

The network layer connects all things in the IoT and allows them to be aware of their surroundings. It is capable of aggregating data from existing IT infrastructures and transmitting it to other layers, such as the sensing layer and the service layer. The IoT connects a variety of different networks, which can introduce many networking, security, and communication problems.

The deployment, management, and scheduling of networks are essential for the network layer in IoT. This enables devices to perform tasks collaboratively. In the networking layer, the following issues should be addressed:

Network management technologies including the management for fixed, wireless, mobile networks,

Network energy efficiency,

Requirements of QoS,

Technologies for mining and searching,

Information confidentiality,

Security and privacy.

Among these issues, information confidentiality, human privacy, and security are critical because of the network's deployment, mobility, and complexity. Existing network security technologies provide a basis for privacy and security protection in the IoT, but more work still needs to be done. The security requirements in the network layer involve:

Overall security requirements, including confidentiality, integrity, privacy protection, authentication, group authentication, keys protection, availability, etc.

Privacy leakage: Some IoT devices are physically located in untrusted places, creating the risk that attackers can physically extract private information such as user identification.

Communication security: It involves the integrity and confidentiality of signaling in IoT communications.

Overconnection: An overconnected IoT runs the risk of slipping out of the user's control. Two security concerns follow: (1) DoS attacks: the bandwidth required for signaling authentication can cause network congestion and, in turn, denial of service; (2) key security: in an overconnected network, key operations can consume heavy network resources.

MITM attack: The attacker makes independent connections with the victims and relays messages between them, making them believe that they are talking directly to each other over a private connection, when in fact the attacker controls the entire conversation.

Fake network message: Attackers could create fake signaling to isolate devices from the IoT or cause them to misoperate.
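A standard countermeasure against the forged-signaling threat above is message authentication. The following is a minimal sketch, not a mechanism from this chapter, using HMAC-SHA256 so a receiver can reject signaling messages that were not produced by a holder of the shared key; key distribution is assumed and out of scope here.

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> bytes:
    # Compute an HMAC-SHA256 tag over the signaling message.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison prevents timing side channels.
    return hmac.compare_digest(sign(key, message), tag)
```

A device would attach the tag to each signaling message; a receiver recomputes the tag and drops any message that fails verification.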

The possible security threats in the network layer are summarized in Table 1.4, and the potential security threats and vulnerabilities are analyzed in Table 1.5.

Table 1.4. Security Threats in Network Layer

Security Threats | Description
Data breach | Release of secure information to an untrusted environment
Public key and private key | Compromise of keys in the network
Malicious code | Viruses, Trojans, and junk messages that can cause software failure
DoS | An attempt to make an IoT end-node resource unavailable to its users
Transmission threats | Threats in transmission, such as interruption, blocking, data manipulation, and forgery
Routing attack | Attacks on a routing path

Table 1.5. The Security Threats and Vulnerabilities in Network Layer

Privacy Leakage | Confidentiality | Integrity | DoS | PKI | MITM | Request Forgery
Physical protection
Transmission security
Overconnected
Cross-layer fusion

The network infrastructure and protocols developed for the IoT differ from those of existing IP networks, so special effort is needed on the following security concerns: (1) authentication/authorization, which involves vulnerabilities such as passwords and access control; and (2) secure transport encryption: it is crucial to encrypt transmissions at this layer.
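To make the second concern concrete, here is a minimal sketch of transport encryption for a device-to-gateway connection, assuming TLS over TCP with Python's standard `ssl` module; the port (8883, as used for MQTT over TLS) and any host name are assumptions, not tied to a specific IoT stack.

```python
import socket
import ssl

def make_tls_context() -> ssl.SSLContext:
    # create_default_context() already requires and verifies server
    # certificates and enables hostname checking.
    context = ssl.create_default_context()
    # Refuse legacy protocol versions.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def open_secure_channel(host: str, port: int = 8883) -> ssl.SSLSocket:
    # Wrap a plain TCP connection; server_hostname enables SNI and
    # hostname verification against the server certificate.
    raw = socket.create_connection((host, port))
    return make_tls_context().wrap_socket(raw, server_hostname=host)
```

All application data sent over the returned socket is then encrypted in transit, addressing the confidentiality and integrity requirements listed earlier.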


URL: https://www.sciencedirect.com/science/article/pii/B9780128044582000019

Working with the Business Process Management (BPM) Life Cycle

Mark von Rosing, ... Anette Falk Bøgebjerg, in The Complete Business Process Handbook, 2015

Phase 4: Deploy/Implement—Go Live

The 4th phase, the Process Deployment and Implementation phase (see Figure 10), is the phase in which the organization launches, implements, deploys, and transitions the processes to execution (go live). Process Release and Deployment Management in the BPM Life Cycle aims to plan, schedule, and control the movement of releases to test and live environments. The primary goal of Release and Deployment Management is to ensure that the integrity of the live environment is protected and that the correct components are released on time and without errors.

Release and Deployment Management aims to build, test, and deliver to customers the services specified by the process design, by deploying releases into operation and establishing effective use of the service to deliver value to the customer. As illustrated in Figure 11, process implementation involves multiple aspects, from coordination with process owners and change management to process training.


Figure 11. Example of a process rollout diagram (Lego Group, Anette Falk Bøgebjerg, Director).

The purpose of Release and Deployment Management is to:

Define and agree release and deployment plans with customers/stakeholders

Ensure that each release package consists of a set of related assets and service components that are compatible with each other

Ensure that the integrity of a release package and its constituent components is maintained throughout the transition activities and recorded accurately in the configuration management system

Ensure that all release and deployment packages can be tracked, installed, tested, verified, and/or uninstalled or backed out, if appropriate

Ensure that change is managed during the release and deployment activities

Record and manage deviations, risks, and issues related to the new or changed service, and take necessary corrective action

Ensure knowledge transfer to enable the customers and users to optimize their use of the service to support their business activities

Ensure that skills and knowledge are transferred to operations and support staff to enable them to effectively and efficiently deliver, support, and maintain the service, according to required warranties and service levels

Plans for release and deployment will be linked into the overall service transition plan. The approach is to ensure an acceptable set of guidelines is in place for the release into production/operation. Release and deployment plans should be authorized as part of the change management process.

The plan should define the:

Scope and content of the release

Risk assessment and risk profile for the release

Customers/users affected by the release

Change advisory board (CAB) members that approved the change request for the release and/or deployment

Team who will be responsible for the release

Delivery and deployment strategy

Resources for the release and deployment

Build and test planning establishes the approach to building, testing, and maintaining the controlled environments prior to production. The activities include:

Developing build plans from the service design package, design specifications, and environment configuration requirements

Establishing the logistics, lead times, and build times to set up the environments

Testing the build and related procedures

Scheduling the build and test activities

Assigning resources, roles, and responsibilities to perform key activities

Preparing build and test environments

Managing test databases and test data

Managing software licenses

Procedures, templates, and guidance should be used to enable the release team to build an integrated release package efficiently and effectively. Procedures and documents will be required for purchasing, distributing, installing, moving, and controlling assets and components that are relevant to acquiring, building, and testing a release.14

Step 21: Decide on Process Implementation (Based on Requirements)

Develop a plan for implementing the processes and the tools in the organization. This plan should describe how to efficiently move from the organization’s current state to the release and deployment state. To develop this plan, you need to follow specific project steps.15 The output of step 21 is consumed by steps 22, 24, 25, and 27.

Typical tasks that are done within this step:

Set or revise goals

Identify risks

Distribute responsibilities and tasks

Decide when to launch processes and tools

Plan training and mentoring

Typical templates that are used:

Process Map and/or Matrix

Service Map and/or Matrix

Stakeholder Map and/or Matrix

Object Map and/or Matrix

Typical BPM CoE roles involved:

Process eXperts

Process Architects

Step 22: Process Rollout

During the rollout phase, all areas of change are tested together in the business environment to generate confidence that everything is ready to “go live.” During this phase, business users and support teams also receive appropriate training concerning the new processes and the associated systems, organization, and infrastructure.16 The process rollout should be executed meticulously, using a step-by-step approach, and categorized into levels of importance, preferably based on criteria such as complexity, time, cost, and urgency, with clearly defined steps for when the main, supporting, and management process rollouts should occur, and in what sequence. The output of step 22 is consumed by steps 23 and 28.

Typical tasks that are done within this step:

Process rollout

Ensure end-to-end process rollout and consistency

Bring all processes up to target performance

Business users and process team training

Test process capability and process adjustment

Manage issue management and change-request handling

Implement all the components of the solution

Typical templates that are used:

Process Map and/or Matrix

Service Map and/or Matrix

Object Map and/or Matrix

Application Service Map and/or Matrix

Data Service Map and/or Matrix

Application Rule Map and/or Matrix

Data Rule Map and/or Matrix

Compliance Map and/or Matrix

Typical BPM CoE roles involved:

Process eXperts

Process Architects

Quality Gate 4:

Process rollout

Ensure process quality

Ensure process coverage

Step 23: Add Process Rewards

Process reward recognition is not just a nice thing to do for the organization or its employees. Process reward recognition is a communication tool that reinforces and rewards the most important process outcomes that people create for your organization. When you recognize people effectively, you reinforce, with your chosen means of process reward recognition, the actions and behaviors you most want to see people repeat. Therefore, process rewards should be defined and created to incite employee motivation for successful implementation, and as rewards for achieving process and value goals. The output of step 23 is consumed by steps 22 and 24.

Typical tasks that are done within this step:

Establish criteria for what process performance or process contribution constitutes behavior or actions that are rewarded

All employees must be eligible for the process reward

Implement process rewards into the process performance model

Build organizational motivation for chasing process rewards to elevate process performance

The process reward recognition should occur as close to the performance of the actions as possible, so the recognition reinforces behavior the employer wants to encourage.

Typical templates that are used:

Value Map and/or Matrix

Stakeholder Map

Organizational Chart Map

Performance Map and/or Matrix

Typical BPM CoE roles involved:

Process eXperts

Value eXperts

Step 24: Enable Process Performance Measurements

Process performance measurement is the process of collecting, analyzing, and reporting information regarding the performance of a group of processes or an individual process. Enabling performance measurements for processes at all measurable levels is an essential part of any BPM Life-cycle project and directly links to monitoring, reporting, decision making, and process evaluation and audits. The output of step 24 is consumed by steps 23 and 25.

Typical tasks that are done within this step:

Develop measurement metrics for a process performance model

Define and relate Process Performance Indicators (PPIs) for process levels 3–5

Enable Process Performance Reporting and Evaluation

Identify, categorize, and label Strategic, Tactical, and Operational Process Performance Indicators

Associate the categorized Strategic, Tactical, and Operational Process Performance Indicators with the relevant performance goals/objectives

Create a Performance Model with decision making and reporting that illustrates the connection and relationship between Strategic, Tactical, and Operational Process Performance Indicators and the business goals and objectives.
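The performance model described in the tasks above can be sketched as a small data structure. This is purely illustrative, assuming a simple attainment ratio against a target; the class and field names are my own, not part of the BPM Life Cycle notation.

```python
from dataclasses import dataclass, field

@dataclass
class PPI:
    """A Process Performance Indicator at a given level."""
    name: str
    level: str            # "strategic", "tactical", or "operational"
    target: float
    actual: float = 0.0

    def attainment(self) -> float:
        # Fraction of the target achieved (0.0 if no target is set).
        return self.actual / self.target if self.target else 0.0

@dataclass
class Goal:
    """A business goal/objective linked to its supporting PPIs."""
    name: str
    ppis: list = field(default_factory=list)

    def on_track(self, threshold: float = 0.9) -> bool:
        # The goal is on track if every linked PPI meets the threshold.
        return all(p.attainment() >= threshold for p in self.ppis)
```

Linking each `Goal` to its `PPI` objects is what makes the reporting relationship between indicator levels and business objectives explicit.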

Typical templates that are used:

Process Map and/or Matrix

Measurement and Reporting Map and/or Matrix

Performance Map and/or Matrix

Typical BPM CoE roles involved:

Process eXperts

Process Architects

Value eXperts

Value Gate 4a:

Process performance measurements

Performance measurement tools efficiency

Process efficiency evaluation

Process reporting and evaluation

Step 25: Define Performance Indicators Based on Value Drivers

Establishing direct links between performance indicators and value drivers is essential for both process-modeling and value-modeling perspectives. Therefore, because value drivers indicate value-generating mechanisms, it is important to define performance indicators and let them be based on predefined value drivers. This enables process owners to control and measure the flow of value within the processes on both the high-level and detailed process landscape. The output of step 25 is consumed by steps 24 and 26.

Typical tasks that are done within this step:

Define, associate, and relate the Process Performance Indicators based on Value Drivers

Develop value measurements linked to the process performance measurements

Enable Value-based Reporting and Evaluation

Create a Value Model with decision making and reporting that illustrates the connection and relationship between performance indicators and value indicators.

Typical templates that are used:

Process Map and/or Matrix

Performance Map and/or Matrix

Value Map and/or Matrix

Typical BPM CoE roles involved:

Value eXperts

Process eXperts

Enterprise Architects

Value Gate 4b:

Process performance indicators establishment

Number of process targets reached

Number of process targets obsolete

Increase/decrease number of process targets

Step 26: Harmonize Terms

The harmonization of process terms across the process landscape has to be continuously evaluated and managed by process owners and teams. Different BPM-oriented organizations and groups today have the tendency to call certain process objects various names, and the same goes for the various BPM frameworks, methods, and approaches, such as Six Sigma, Lean, and BPR, that use terms in specialized ways. Several business process methodologies have described the use of terms in specific ways. Formal business process languages, like BPML, have semantic definitions that are enforced by the language. Unfortunately, many of these different sources use terms in slightly different ways.17 We have, therefore, provided basic process terminology and definitions in the BPM ontology chapter. However, it will still be necessary for any organization to tailor these terms, gather additionally needed terms, and establish their own documentation for process terms and definitions to be able to harmonize variants across process groups and process areas to achieve process harmonization (i.e., standardization and integration). The output of step 26 is consumed by steps 25 and 27.

Typical tasks that are done within this step:

Identify, assess, and establish process-level commonality across the organization

Identify, assess, and establish process-harmonization opportunities across the organization

Gather existing process terminology or use the BPM Ontology terminology as a basis to identify relevant terms

Agree on process terms relevant for the organization

Ensure process-term harmonization across the organization

Identify, assess, and establish process-standardization opportunities across the organization

Typical templates that are used:

Process Map and/or Matrix

Information Map and/or Matrix

Service Map and/or Matrix

Object Map and/or Matrix

Typical BPM CoE roles involved:

Process eXperts

Process Architects

Step 27: Establish Process Ownership

As process owners are responsible for the management of processes within the organization, the success of the organization’s BPM initiatives depends heavily on implementing good process ownership (see Figure 12). Regardless of the maturity model being applied by an organization, the creation or assignment of process ownership normally occurs one level up from the status quo. However, why is this difficult? Ironically, one of the most neglected areas of process transformation in any kind of change is the definition and assignment of roles and responsibilities. Although there is now a general acknowledgement that people are one of the most critical success factors in any type of business transformation, most organizations are not very accomplished at implementing “people”-oriented changes.19 In some cases, process owners are current leaders/managers, and in other cases, process owners may be taken from nonleadership positions. Organizational management and structure are effective tools for establishing process ownership, along with a clear definition of employee requirements and responsibilities. This also creates the need to document definitions of process roles, responsibilities, and the who-does-what structure within process-specific teams. The output of step 27 is consumed by step 26.


Figure 12. A table tool that can be used to link process ownership with value maps and performance maps.

Ref. 18.

Typical tasks that are done within this step:

Specify process ownership responsibility and tasks

Select process owners

Implement a process-ownership organization

Appoint key process roles reporting or working with process owner

Develop and implement process-improvement initiatives

Define the process and monitor process performance

Develop and manage policies and procedures related to the process

Ensure process adoption, harmonization, standardization, and integration

Enable process innovation and transformation (link to BPM Change Management and Continuous Improvement)

Typical templates that are used:

Process Map

Information Map

Owner Map and/or Matrix

Typical BPM CoE roles involved:

Process eXperts

Process Architects

In the 4th phase of the BPM Life Cycle, we went through a series of steps to execute a successful Release and Deployment Management plan to take the business processes into the production environment and go live. In the upcoming Run and Maintain phase (see Figure 13), we focus on management of the running process environment, in which we will put a lot of effort into monitoring and governing the entire process landscape of the organization.


Figure 13. The Run/Maintain phase of the BPM Life Cycle.

Ref. 20.


URL: https://www.sciencedirect.com/science/article/pii/B9780127999593000148

Security as a service (SecaaS)—An overview

Baden Delamore, Ryan K.L. Ko, in The Cloud Security Ecosystem, 2015

1.4 Motivation for this chapter

SecaaS represents a new model for the deployment, management, and utilization of security services based on cloud computing principles. Although SecaaS is often considered an emerging trend in IT, the literature around it is sparse. This fact, coupled with an expected proliferation of cloud-based services over the coming years, makes an overview of this model timely. Moreover, there is growing interest from businesses in adopting this model. In fact, a 2013 press release from Gartner, a research and IT consulting company, anticipated that cloud security services would grow by nearly 40% over the following 2 years (Gartner, 2013).

These are exciting times for businesses, academia, and corporate organizations that wish to protect their intellectual property and data from traditional threats without major overhead to the day-to-day running of their operations. As we will discuss in this chapter, there are myriad benefits to shifting to cloud-based security services; that said, skepticism remains around their adoption.

This chapter aims to demystify cloud-based security services and provide a compelling discussion of traditional on-premise managed security services (MSS) and cloud-based security models.


URL: https://www.sciencedirect.com/science/article/pii/B9780128015957000094

The Forensic Laboratory Integrated Management System

David Watson, Andrew Jones, in Digital Forensics Processing and Procedures, 2013

General Information

the Forensic Laboratory has overall responsibility for the Forensic Laboratory IT infrastructure and is responsible for the deployment, management, and support of all mobile devices;

business proposals that may require a resource of mobile devices must be discussed with relevant Forensic Laboratory employees (e.g., IT Manager, Information Security Manager, etc.);

the Forensic Laboratory shall not accept responsibility or liability for any damage or loss of data to any device or machine while in transit or connected to the network;

traffic on the IT network may be monitored by the Forensic Laboratory to secure effective operation and for other lawful purposes.

The Forensic Laboratory may suspend access to the network via a mobile device for any user found in breach of this or any Forensic Laboratory security policy.

Failure to comply is in breach of this policy and shall be considered a serious disciplinary offence.

This policy is issued and maintained by the Information Security Manager in association with the Human Resources Manager, who also provide advice and guidance on its implementation and ensure compliance.

All Forensic Laboratory employees shall comply with this policy.


URL: https://www.sciencedirect.com/science/article/pii/B9781597497428000042

Tom Laszewski, Prakash Nauduri, in Migrating to the Cloud, 2012

Business Challenges in Database and Application Migration

Typically, businesses do not care which languages are used to develop an application or which database platform is being used by the IT department. They simply care about getting the job done by having access to the right information at the right time and at the lowest cost. However, any change to IT infrastructure, such as the acquisition of new hardware or software, involves a capital expenditure that needs to be approved by the business leadership. Therefore, for IT infrastructure changes, IT departments have to provide a business case depicting the business value in such terms as ROI and Total Cost of Ownership (TCO), along with the potential improvements the business will see, such as quicker turnaround time for generating important reports (from hours to minutes) and reduced batch cycle times. As businesses adopt new technologies and computing architectures, they face newer and different types of problems. As we discussed in Chapter 1, the advent of client/server computing and of Internet computing silos led to inefficiencies in data centers as a majority of data center servers were underutilized and required significant management effort.

With the interest in cloud computing gaining momentum, there will be challenges ahead for businesses that use cloud services as well as businesses that provide cloud services. Any improvement in the usability features of databases and applications, such as the use of a GUI instead of a terminal emulator, may not be considered a big change from a business perspective. However, if a new database or application can prove to the business that it can reduce deployment, management, and productivity costs, it will be very appealing to the business leadership. Major concerns with respect to database and application changes for business leaders include the following:

Cost of the new platform

Cost of migrating to the new platform

Duration of the migration project

Impact on existing users of the applications

Impact on the IT staff (new hires versus retraining the existing staff)

The cost and duration of a migration project usually depend on the number of applications and databases being considered for migration, as well as the complexity and migration approach adopted. Similarly, the impact on existing users (e.g., retraining users on the new platform) also depends on the migration approach selected. At a minimum, retraining database administrators to manage Oracle databases as a result of a database migration is essential.


URL: https://www.sciencedirect.com/science/article/pii/B9781597496476000144

Collection

Edward G. Amoroso, in Cyber Attacks, 2011

Security Information and Event Management

The process of aggregating system data from multiple sources for the purposes of protection is referred to in the computer security community as security information and event management (SIEM). Today, SIEM tools can be purchased that allow collection from a diverse set of technologies from different vendors. This typically includes firewalls, intrusion detection systems (IDS), servers, routers, and applications. Just about every commercial enterprise and government agency today includes some sort of SIEM deployment. One could easily imagine this functionality being extended to include collection feeds from mainframes, servers, and PCs (see Figure 8.6).


Figure 8.6. Generic SIEM architecture.

The SIEM system will include translation functions that convert proprietary outputs, logs, and alarm streams from the different vendors into a common format. From this common collection format, a set of common functions can then be performed, including data storage, display, sharing, and analysis. National infrastructure protection must include rational means for interpreting SIEM data from components, if only because many organizations will already have a SIEM system in place for processing their locally collected data. This interpretation of SIEM data from multiple feeds will be complicated by the fact that most existing SIEM deployments in different companies, sectors, and government agencies are mutually incompatible. A more critical problem, however, is the reluctance among most security managers to instrument a real-time connection from their SIEM system to a national collection system. A comparable problem is that service providers do not currently feed the output of their consumer customers’ data into a regional SIEM system.
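A translation function of the kind described above can be sketched as one parser per vendor feed, each emitting records in a single common schema. The two proprietary input formats below are invented for illustration; real parsers would target actual vendor log formats.

```python
from datetime import datetime, timezone

# The common event schema shared by every translated record.
COMMON_FIELDS = ("timestamp", "source", "severity", "message")

def from_firewall(line: str) -> dict:
    # Assumed proprietary firewall format: "<epoch>|<sev_code>|<text>"
    epoch, sev, text = line.split("|", 2)
    return {
        "timestamp": datetime.fromtimestamp(int(epoch), tz=timezone.utc).isoformat(),
        "source": "firewall",
        "severity": {"0": "info", "1": "warn", "2": "alert"}.get(sev, "unknown"),
        "message": text,
    }

def from_ids(record: dict) -> dict:
    # Assumed proprietary IDS format: {"ts": iso8601, "prio": int, "sig": str}
    return {
        "timestamp": record["ts"],
        "source": "ids",
        "severity": "alert" if record["prio"] >= 3 else "info",
        "message": record["sig"],
    }
```

Once every feed is normalized this way, the downstream storage, display, sharing, and analysis functions only ever see one record shape.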

Security managers will be reluctant to link their SIEM system to a national collection system.

In any event, the architecture for a national system of data collection using SIEM functionality is not hard to imagine. Functionally, each SIEM system could be set up to collect, filter, and process locally collected data for what would be considered nationally relevant data for sharing. This filtered data could then be sent encrypted over a network to an aggregation point, which would have its own SIEM functionality. Ultimately, SIEM functions would reside at the national level for processing data from regional and enterprise aggregation points. In this type of architecture, local SIEM systems can be viewed as data sources, much as the firewalls, intrusion detection systems, and the like are viewed in a local SIEM environment (see Figure 8.7).


Figure 8.7. Generic national SIEM architecture.

Local and regional SIEM systems would work as filters to feed only relevant data to a national collection point.
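The filtering role just described can be sketched as a selection function run at the local SIEM before anything is forwarded upstream. The relevance rule here, a severity threshold plus a category whitelist, is an assumption for illustration only; what counts as "nationally relevant" would be a policy decision.

```python
# Hypothetical categories deemed nationally relevant.
RELEVANT_CATEGORIES = {"worm_propagation", "infrastructure_probe", "ddos"}

def select_for_national_feed(events, min_severity=3):
    """Keep only events worth forwarding to the national collection point."""
    return [
        e for e in events
        if e.get("severity", 0) >= min_severity
        and e.get("category") in RELEVANT_CATEGORIES
    ]
```

The selected subset would then be encrypted and sent to the regional aggregation point, exactly as the architecture in Figure 8.7 suggests.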

Unfortunately, most local infrastructure managers have not been comfortable with the architecture shown in Figure 8.7 for several reasons. First, there are obviously costs involved in setting up this sort of architecture, and generally these funds have not been made available by government groups. Second, it is possible that embedded SIEM functionality could introduce functional problems in the local environment. It can increase processor utilization on systems with embedded SIEM hooks, and it can clog up network environments, especially gateway choke points, with data that might emanate from the collection probes.

Will a national data collection system put an increased financial burden on private agencies and enterprises?

A much more critical problem with the idea of national SIEM deployment is that most enterprise and government agency security managers will never be comfortable with their sensitive security data leaving local enterprise premises. Certainly, a managed security service provider might be already accepting and processing security data in a remote location, but this is a virtual private arrangement between a business and its supplier. The data is not intended for analysis other than to directly benefit the originating environment. Furthermore, a service level agreement generally dictates the terms of the engagement and can be terminated by the enterprise or agency at any time. No good solutions exist for national SIEM implementation, other than the generally agreed-upon view that national collection leads to better national security, which in turn benefits everyone.

There are still too many unanswered questions about the security of sensitive data leaving private enterprises.


URL: https://www.sciencedirect.com/science/article/pii/B9780123849175000081

Cloud Logging

Anton Chuvakin, ... Chris Phillips, in Logging and Log Management, 2013

SIEM in the Cloud

This chapter would not be complete without a discussion of SIEM in the cloud. Traditionally, SIEM systems are shrink-wrapped: you buy a license for the software and, depending on its complexity, spend money on consultants who in turn spend long periods installing and configuring the SIEM. SIEM software companies have realized that companies are unwilling to spend hundreds of thousands of dollars (or more) on SIEM deployments. In fact, many of these software companies are scrambling to retrofit their software to be cloud deployable.

This is where Managed Security Service Providers (MSSPs) have really shined over the years (in fact, MSSPs were doing cloud logging well before “cloud logging” was a phrase). MSSPs are companies that take on the burden of managing network security for another organization. The typical models are monitoring only, management only, or monitoring and management.

Figure 21.4 shows the logical layout for a SIEM cloud.


Figure 21.4. Logical Layout of SIEM Cloud

What is the first thing you notice? It looks very similar to the cloud logging figure we saw in the Cloud Logging section. The main difference is that the SIEM cloud tends to be a little more mature with respect to feature sets. The other obvious difference is the application stack in the cloud, which shows that the SIEM cloud has a robust set of features, all accessible via an API or set of APIs. This is key not only to providing services to internal customers and systems (Operations, Security Operations Center (SOC), Network Operations Center (NOC), billing, HR, etc.), but also to external customers. For example, customers will often want to export tickets from the provider to their internal systems. Most providers implement some sort of Web-based API (RESTful, SOAP, etc.), which makes it easy for the customer to write custom application code to obtain their tickets.
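A ticket-export client of the kind just mentioned might look like the sketch below. The endpoint path, bearer-token header, and JSON response shape are all assumptions, not any real MSSP's API; only Python's standard library is used.

```python
import json
import urllib.request

def build_ticket_request(base_url: str, token: str) -> urllib.request.Request:
    # Assemble an authenticated GET request for the (hypothetical)
    # ticket-export endpoint.
    return urllib.request.Request(
        f"{base_url}/api/v1/tickets",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )

def parse_tickets(body: str) -> list:
    # The provider is assumed to return a JSON array of ticket objects.
    return json.loads(body)

def fetch_tickets(base_url: str, token: str) -> list:
    # Issue the request and decode the response body.
    with urllib.request.urlopen(build_ticket_request(base_url, token)) as resp:
        return parse_tickets(resp.read().decode("utf-8"))
```

The decoded ticket list can then be loaded straight into the customer's internal ticketing system.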

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978159749635300021X

Maturity and Readiness

Dennis Nils Drogseth, ... Dan Twing, in CMDB Systems, 2015

Processes

Approached in its full granularity, the broad list of ITIL processes below can be a daunting way to assess IT maturity and evolution:

Financial management for IT services

Service-level management

Availability management

Capacity management

IT service continuity management

Information security management

Change management

Service configuration and asset management

Release and deployment management

Service desk

Application management

IT operations management

Technical management

Event management

Incident management

Problem management

Continual service improvement

Identity management

To simplify making a maturity assessment, Figure 9.4 shows a grouping of process interdependencies as organizational teams might focus on them. This allows for efficiency and readiness to be considered team by team. This simplified approach can also be a useful reference point going forward as you evaluate organizational dynamics across the four stages of maturity:


Figure 9.4. Clustering ITIL processes by team or organizational model can be a helpful shorthand enabling you to assess and map your own maturity levels to the four-phase maturity model in this book. The following list provides a closer look at how they map. It should be mentioned that configuration management is a pervasive presence across virtually all of these groups.

Service Support: Most IT organizations begin with some level of service support, even if it's focused on end-user PC complaints. This category also includes functions such as help desk, customer service, and client support; incident and problem management; knowledge base; and change/request management.

Operations: This area includes disciplines such as real-time, predictive, and historical fault and performance management; job scheduling; business continuity management; and output management.

Development and Service Delivery (DevOps): The very concept of DevOps dictates a fundamental shift toward more of a business focus. The increasing prominence of DevOps is a sign of positive evolution within IT, as is attention to the core foundations for service delivery such as service-level management, service provisioning (for enabling services), and the critical handshake between advances in simulation and automation in development and advances in change and performance management in service support and operations.

Security: Broadly speaking, security includes access control, identity management, and threat management. The latter also includes functions such as intrusion detection, security/event management, and virus protection. Security, which has long been a separate enclave within IT, is evolving to become a more systemic practice, becoming more tightly integrated with other management processes and non-IT domains, such as human resources and accounting. The growing role of analytics and the faster pace of change are also pushing security more and more into the mainstream of operations, development, and other IT groups.

IT Financial and Resource Management: This includes procurement, inventory management, software distribution and hardware release management, license management, configuration management, capacity planning, and optimization of the infrastructure, as well as usage insights for costing and portfolio planning. While many IT organizations have some level of asset management, very few have evolved to support an integrated, more strategic focus here, something that a CMDB System can help to enable.

Cross domain Service Management: Although there is clearly a move in this direction due to cloud and other factors, few IT organizations actually use this name for their cross domain efforts. The goal is to understand how your IT organization is evolving to support cross domain requirements, to seek out any such teams already in place, and to engage with stakeholders in other groups. Even if the CMDB falls under a separate organization, this group more often than not will be closely allied with your efforts to both improve your IT Maturity Levels and enable your CMDB System success.

Cloud is accelerating the need for cross domain service management teams, which are now often associated with planning the move to internal and external cloud resources more efficiently. In other IT environments, this group can be associated with architectural planning and design, or with tool set selection and support, or with change and configuration management and the CMDB itself. Therefore, don't approach the title too literally. Try instead to understand how your IT organization is, or is not, beginning to move in this “cross domain service management” direction in its own unique way.


URL: https://www.sciencedirect.com/science/article/pii/B9780128012659000093

Understanding XenApp Security

Tariq Bin Azad, in Securing Citrix Presentation Server in the Enterprise, 2008

Introducing Microsoft Security Tools

The first level of the XenApp Security Model deals with the server itself. First and foremost, you need to have your Windows server properly configured and locked down. There are many ways that you can secure the base operating system. Microsoft has many freely available tools that can assist you with the security configuration of your servers and help you to maintain an effective security posture, such as:

Security Configuration and Analysis Tool This is a Microsoft Management Console (MMC) snap-in that allows you to use default or custom configured templates so that you can analyze and configure security settings on a Windows 2003-based computer.

Microsoft Baseline Security Analyzer (MBSA) This tool, shown in Figure 4.4, scans for missing security updates and common security settings that are configured incorrectly. Typically, this tool is used in conjunction with Microsoft Update or Windows Server Update Services.


Figure 4.4. Using the Microsoft Security Baseline Analyzer Tool (MBSA)

Extended Security Update Inventory Tool This tool is used to detect security bulletins not covered by the MBSA and future bulletins that are exceptions to the MBSA.

System Center Configuration Manager This tool provides operating system and application deployment and configuration management. It is the successor to Systems Management Server (SMS) 2003.

Microsoft Security Assessment Tool (MSAT) This tool, shown in Figure 4.5, is designed to help you assess weaknesses in your information technology (IT) security environment. The tool provides detailed reporting and specific guidance to minimize risks it has identified.


Figure 4.5. Using the Microsoft Security Assessment Tool (MSAT)

Microsoft Update (www.update.microsoft.com) This Microsoft Web site combines the features of Windows Update and Office Update into a single location that enables you to choose automatic or manual delivery and installation of high-priority updates.

Windows Server Update Services (WSUS) This tool provides an automated way for keeping your Windows environment current with the latest updates and patches.

Microsoft Office Update (www.officeupdate.com) This Microsoft Web site scans and updates Microsoft Office products.

IIS Lockdown Tool This tool provides security configuration for Internet Information Services (IIS) and can be used in conjunction with URLScan to provide multiple layers of protection against attackers.

UrlScan Tool This tool helps prevent potentially harmful HTTP requests from reaching IIS Web servers.

EventCombMT This multithreaded tool will parse event logs from many servers at the same time to assist you with finding specific event entries.

PortQry This tool is a Transmission Control Protocol/Internet Protocol (TCP/IP) connectivity testing utility that can aid you in determining active TCP ports in use on a system.

Malicious Software Removal Tool This tool checks a system for infections by specific, prevalent malicious software, including Blaster, Sasser, and Mydoom. The tool can also assist in removing any discovered infections. Microsoft releases an updated version of this tool every month.

Port Reporter This tool is a service that logs TCP and User Datagram Protocol (UDP) port activity.
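To illustrate the kind of check a connectivity tester such as PortQry performs, here is a minimal sketch (not related to the Microsoft tool itself) that probes whether a TCP port on a host accepts connections:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts the TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

Note that a probe like this only tells you whether something is listening on the port; dedicated tools such as PortQry additionally distinguish filtered from closed ports and cover UDP, which a plain TCP connect cannot.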

Tip

The servers used in our labs for this book were first locked down using the Windows Server 2003 Security guide and the tools listed above available from the Microsoft Web site, www.microsoft.com/security. The National Security Agency (NSA) also provides several documents that are publicly available from their Web site, www.nsa.gov/snac, to assist in the securing of other assets. As in any environment, you should first implement the recommended settings in a test environment before configuring a live production network.


URL: https://www.sciencedirect.com/science/article/pii/B9781597492812000044

Cloud Computing Architecture

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

4.3.4 Community clouds

Community clouds are distributed systems created by integrating the services of different clouds to address the specific needs of an industry, a community, or a business sector. The National Institute of Standards and Technologies (NIST) [43] characterizes community clouds as follows:

The infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Figure 4.6 provides a general view of the usage scenario of community clouds, together with reference architecture. The users of a specific community cloud fall into a well-identified community, sharing the same concerns or needs; they can be government bodies, industries, or even simple users, but all of them focus on the same issues for their interaction with the cloud. This is a different scenario than public clouds, which serve a multitude of users with different needs. Community clouds are also different from private clouds, where the services are generally delivered within the institution that owns the cloud.


Figure 4.6. A community cloud.

From an architectural point of view, a community cloud is most likely implemented over multiple administrative domains. This means that different organizations such as government bodies, private enterprises, research organizations, and even public virtual infrastructure providers contribute with their resources to build the cloud infrastructure.

Candidate sectors for community clouds are as follows:

Media industry. In the media industry, companies are looking for low-cost, agile, and simple solutions to improve the efficiency of content production. Most media productions involve an extended ecosystem of partners. In particular, the creation of digital content is the outcome of a collaborative process that includes movement of large data, massive compute-intensive rendering tasks, and complex workflow executions. Community clouds can provide a shared environment where services can facilitate business-to-business collaboration and offer the horsepower in terms of aggregate bandwidth, CPU, and storage required to efficiently support media production.

Healthcare industry. In the healthcare industry, there are different scenarios in which community clouds could be of use. In particular, community clouds can provide a global platform on which to share information and knowledge without revealing sensitive data maintained within the private infrastructure. The naturally hybrid deployment model of community clouds can easily support the storing of patient-related data in a private cloud while using the shared infrastructure for noncritical services and automating processes within hospitals.

Energy and other core industries. In these sectors, community clouds can bundle the comprehensive set of solutions that together vertically address management, deployment, and orchestration of services and operations. Since these industries involve different providers, vendors, and organizations, a community cloud can provide the right type of infrastructure to create an open and fair market.

Public sector. Legal and political restrictions in the public sector can limit the adoption of public cloud offerings. Moreover, governmental processes involve several institutions and agencies and are aimed at providing strategic solutions at local, national, and international administrative levels. They involve business-to-administration, citizen-to-administration, and possibly business-to-business processes. Some examples include invoice approval, infrastructure planning, and public hearings. A community cloud can constitute the optimal venue to provide a distributed environment in which to create a communication platform for performing such operations.

Scientific research. Science clouds are an interesting example of community clouds. In this case, the common interest driving different organizations sharing a large distributed infrastructure is scientific computing.

The term community cloud can also identify a more specific type of cloud that arises from concern over vendor control in cloud computing and that aspires to combine the principles of digital ecosystems [44] with the case study of cloud computing. Such a community cloud is formed by harnessing the underutilized resources of user machines [45] and providing an infrastructure in which each user can at the same time be a consumer, a producer, or a coordinator of the services offered by the cloud. The benefits of these community clouds are the following:

Openness. By removing the dependency on cloud vendors, community clouds are open systems in which fair competition between different solutions can happen.

Community. Being based on a collective that provides resources and services, the infrastructure turns out to be more scalable because the system can grow simply by expanding its user base.

Graceful failures. Since there is no single provider or vendor in control of the infrastructure, there is no single point of failure.

Convenience and control. Within a community cloud there is no conflict between convenience and control because the cloud is shared and owned by the community, which makes all the decisions through a collective democratic process.

Environmental sustainability. The community cloud is supposed to have a smaller carbon footprint because it harnesses underutilized resources. Moreover, these clouds tend to be more organic by growing and shrinking in a symbiotic relationship to support the demand of the community, which in turn sustains it.

This is an alternative vision of a community cloud, focusing more on the social aspect of the clouds that are formed as an aggregation of resources of community members. The idea of a heterogeneous infrastructure built to serve the needs of a community of people is also reflected in the previous definition, but in that case the attention is focused on the commonality of interests that aggregates the users of the cloud into a community. In both cases, the concept of community is fundamental.


URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000048
