
What is data classification?

Data classification is the process of organizing data into categories that make it easy to retrieve, sort and store for future use.

A well-planned data classification system makes essential data easy to find and retrieve. This can be of particular importance for risk management, legal discovery and regulatory compliance.

Written procedures and guidelines for data classification policies should define what categories and criteria the organization will use to classify data. They also specify the roles and responsibilities of employees within the organization regarding data stewardship.

Once a data classification scheme is created, the organization should identify security standards that specify appropriate handling practices for each category, along with storage standards that define the data's lifecycle requirements.
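
As a rough illustration of what such standards might look like in practice, the following Python sketch maps a handful of hypothetical categories to handling and storage requirements. The category names, access roles and retention periods are placeholders, not values prescribed by any particular policy.

    from dataclasses import dataclass

    @dataclass
    class HandlingStandard:
        """Handling and storage requirements tied to one classification category."""
        encrypt: bool          # encrypt at rest and in transit
        access: str            # who may read the data
        retention_years: int   # lifecycle requirement before review or deletion

    # Hypothetical categories and requirements, for illustration only.
    HANDLING_STANDARDS = {
        "public":       HandlingStandard(encrypt=False, access="anyone", retention_years=1),
        "internal":     HandlingStandard(encrypt=False, access="employees", retention_years=3),
        "confidential": HandlingStandard(encrypt=True,  access="need-to-know", retention_years=7),
        "restricted":   HandlingStandard(encrypt=True,  access="named approvers", retention_years=10),
    }

    print(HANDLING_STANDARDS["confidential"].retention_years)  # 7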

What is the purpose of data classification?

Systematic classification of data helps organizations manipulate, track and analyze individual pieces of data. Data professionals often have a specific goal when categorizing data. The goal affects the approach they take and the classification levels they use.

Some common business goals for these projects include the following:

  • Confidentiality. A classification system safeguards highly sensitive data, such as customers' personally identifiable information (PII), including credit card numbers, Social Security numbers and other vulnerable data types. Establishing a classification system helps an organization focus on confidentiality and security policy requirements, such as user permissions and encryption.
  • Data integrity. A system that focuses on data integrity will require more storage, user permissions and proper channels of access.
  • Data availability. Addressing and ensuring information security and integrity makes it easier to know what data can be shared with specific users.

Why data classification is important

Data classification is an important part of data lifecycle management that specifies which standard category or grouping a data object belongs in. Once data is sorted, classification can help ensure an organization adheres to its own data handling guidelines and to local, state and federal compliance regulations, such as the Health Insurance Portability and Accountability Act (HIPAA). Companies in highly regulated industries often implement data classification processes or workflows to aid in compliance audits and data discovery.

Data classification is used to categorize structured data, but it is especially important for getting the most out of unstructured data. Data categorization also helps identify duplicate copies of data. Eliminating redundant data contributes to efficient use of storage and maximizes data security measures.
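
As a small illustration of the duplicate-detection point, the sketch below hashes file contents and groups files whose hashes collide. The in-memory "files" and their paths are made-up examples, not a real storage layout.

    import hashlib
    from collections import defaultdict

    # Made-up file contents standing in for real storage.
    files = {
        "reports/q1.csv":     b"id,amount\n1,100\n",
        "backup/q1_copy.csv": b"id,amount\n1,100\n",
        "reports/q2.csv":     b"id,amount\n2,250\n",
    }

    # Group paths by a hash of their content; any group larger than one is a duplicate.
    by_digest = defaultdict(list)
    for path, content in files.items():
        by_digest[hashlib.sha256(content).hexdigest()].append(path)

    duplicates = [paths for paths in by_digest.values() if len(paths) > 1]
    print(duplicates)  # [['reports/q1.csv', 'backup/q1_copy.csv']]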

Common data classification steps

Not all data needs to be classified. In some cases, destroying data is the prudent course of action. Understanding why data needs to be classified is an important part of the process.

Steps involved in developing a comprehensive set of policies to govern data include the following:

  • Gather information. At the start of a data categorization project, organizations must identify and inspect the data that needs to be classified or reclassified. It's important to know where it resides, how valuable it is, how many copies exist and who has access to it.
  • Develop a framework. Data scientists and other stakeholders collaborate to develop a framework within which to organize the data. They assign metadata or other tags to the information. This approach enables machines and software to instantly sort data into different groups and categories, as sketched in the example below. Anything from file type to character units to size of data packets may be used to sort the information into searchable, sortable categories.
  • Apply standards. Companies must ensure their data classification strategy conforms to their internal data protection and handling practices and reflects industry standards and customer expectations. Unauthorized disclosure of sensitive information could be a breach of protocol and, in some countries, a crime. To enforce proper protocols and protect against data breaches, the protected data must be categorized and sorted according to the nature of its sensitivity.
  • Process data. This step requires taking stock of the database and identifying and sorting data according to the established framework.
[Image: the six steps that apply to most data classification projects.]
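
To make the framework and processing steps above more concrete, here is a minimal Python sketch in which each record is tagged with classification metadata and then sorted by that tag. The tag names and the keyword rule are simplified placeholders, not a recommended rule set.

    from collections import defaultdict

    def classify(record: dict) -> str:
        """Assign a classification tag using simple, illustrative rules."""
        text = str(record.get("content", "")).lower()
        if "ssn" in text or "credit card" in text:
            return "confidential"
        if record.get("audience") == "external":
            return "public"
        return "internal"

    def process(records: list) -> dict:
        """Tag every record, then sort records into groups by tag."""
        groups = defaultdict(list)
        for record in records:
            record["classification"] = classify(record)  # attach the metadata tag
            groups[record["classification"]].append(record)
        return dict(groups)

    sample = [
        {"id": 1, "content": "Customer SSN on file", "audience": "internal"},
        {"id": 2, "content": "Press release draft", "audience": "external"},
    ]
    print({tag: [r["id"] for r in rows] for tag, rows in process(sample).items()})
    # {'confidential': [1], 'public': [2]}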

Types of data classification

Standard data classification categories include the following:

  • Public information. Data in this category is typically maintained by state institutions and is subject to disclosure under certain laws.
  • Confidential information. This data may be subject to legal restrictions on how it is handled, and mishandling it may carry other consequences.
  • Sensitive information. This is any data stored or handled by state or other institutions that carries authorization requirements and other rules around its use.
  • Personal information. Generally, personal information, or PII, is protected by law and must be handled following certain protocols. Sometimes there are gaps between ethical expectations and current legislative protections for its use.

In computer programming, file parsing is a method of splitting data packets into smaller subpackets that are easier to move, manipulate, categorize and sort. Different parsing styles determine how a system incorporates information. For instance, dates are split up by day, month or year, and words may be separated by spaces.
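
As a small illustration of these parsing styles, the snippet below splits a date into year, month and day fields and a short phrase into words; the sample strings are arbitrary.

    # Parsing examples: a date split on hyphens, words split on spaces.
    date_parts = "2024-05-01".split("-")       # ['2024', '05', '01']
    words = "classify this record".split(" ")  # ['classify', 'this', 'record']
    print(date_parts, words)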

Some standard approaches to data classification using parsing include the following:

  • Manual intervals. With manual intervals, a person goes through the entire data set and enters class breaks by observing where they make the most sense. This is a fine system for smaller data sets, but it may prove problematic for larger collections of information.
  • Defined intervals. Defined intervals specify a number of characters to include in a packet. For example, information might be broken into smaller packets every three units.
  • Equal intervals. Equal intervals divide the full range of the data into a specified number of equal-width groups (see the sketch after this list).
  • Quantiles. Using quantiles involves setting a number of data values allowed per class type.
  • Natural breaks. A program determines on its own where large changes in the data occur and uses those points as class breaks.
  • Geometric intervals. Class widths follow a geometric series, so each class category holds roughly the same number of units while the change between intervals stays consistent.
  • Standard deviation intervals. Class breaks are placed at set multiples of the standard deviation above and below the mean, so each entry is grouped by how far its attributes deviate from the norm.
  • Custom ranges. Users create and set custom ranges. They can change them at any point.
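
The equal-interval and quantile approaches above can be sketched in a few lines of Python with NumPy; the sample values and the choice of four classes are arbitrary.

    import numpy as np

    values = np.array([3, 7, 8, 12, 15, 18, 22, 30, 41, 55])
    n_classes = 4

    # Equal intervals: split the full range into equal-width classes.
    equal_edges = np.linspace(values.min(), values.max(), n_classes + 1)

    # Quantiles: pick breaks so each class holds roughly the same number of values.
    quantile_edges = np.quantile(values, np.linspace(0, 1, n_classes + 1))

    # np.digitize assigns each value to a class based on the interior breaks.
    print(np.digitize(values, equal_edges[1:-1]))     # class index per value
    print(np.digitize(values, quantile_edges[1:-1]))  # class index per value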

Tools used for data classification

Various tools are used in data classification, including databases, business intelligence (BI) software and standard data management systems. Some examples of BI software used for data classification include Databox, Google Data Studio, SAP Lumira and Vise.

More generally, a regular expression is a search pattern used to quickly pull out data that fits a certain category, making it easier to classify all of the information that falls within those particular parameters.
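
For example, a pattern associated with personally identifiable information can be used to flag records for a sensitive category. The sketch below uses deliberately simplified patterns that would produce false positives and negatives on real data.

    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # e.g. 123-45-6789
    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # rough card-number shape

    def looks_sensitive(text: str) -> bool:
        """Return True if the text matches a pattern associated with PII."""
        return bool(SSN_PATTERN.search(text) or CARD_PATTERN.search(text))

    print(looks_sensitive("Invoice total: $120"))        # False
    print(looks_sensitive("SSN 123-45-6789 on record"))  # True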

Benefits of data classification

Using data classification helps organizations maintain the confidentiality, ease of access and integrity of their data.

For unstructured data in particular, data classification lowers the vulnerability of sensitive information. For example, merchants and other businesses that accept major credit cards are expected to comply with the data classification and other requirements of the Payment Card Industry Data Security Standard (PCI DSS), a set of 12 security requirements aimed at safeguarding customer financial information.

Classification also saves companies from paying steep data storage costs. Storing massive amounts of unorganized data is expensive and could be a liability.

General Data Protection Regulation

The European Union's General Data Protection Regulation (GDPR) is a regulation designed to ensure that companies and institutions handle confidential and sensitive personal data carefully and respectfully. It is built around seven guiding principles: lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability. There are steep penalties for noncompliance.

Implementing methodical data classification is a necessity for complying with many parts of GDPR, which requires organizations to apply appropriate security controls to personal data to prevent unauthorized disclosure. Classifying data helps data security teams identify the data that requires anonymization or encryption.
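
As one illustration of how classification results might feed into these controls, the sketch below pseudonymizes fields that a classification step has flagged as personal data by hashing them. The field names, salt handling and truncation are hypothetical, and hashing alone does not amount to full GDPR-grade anonymization.

    import hashlib

    PII_FIELDS = {"name", "email"}  # fields flagged as personal data by classification

    def pseudonymize(record: dict, salt: str) -> dict:
        """Replace flagged fields with stable hashed tokens."""
        out = dict(record)
        for field in PII_FIELDS & record.keys():
            digest = hashlib.sha256((salt + str(record[field])).encode()).hexdigest()
            out[field] = digest[:12]
        return out

    print(pseudonymize({"name": "Ada", "email": "ada@example.com", "country": "UK"},
                       salt="example-salt"))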

Another aspect of GDPR that requires effective data classification is that it gives individuals the right to access, change and delete their personal data. Data classification lets companies quickly retrieve such data and fulfill a person's specific request.

Examples of data classification

A number of different category lists can be applied to the information in a system. These lists of qualifications are also known as data classification schemes. For example, one way to classify sensitivity categories might include classes such as secret, confidential, business use only and public.

An organization might also use a system that classifies information based on the qualities it drills down into. Content-based classification inspects the information that goes into files, looking for certain characteristics. Context-based classification examines indirect indicators such as the application, user, geographic location and creator. User-based classification relies on what an end user designates when creating, editing and reviewing a file.

Data reclassification

To keep data classification systems as efficient as possible, an organization should continuously update them, reassigning values, ranges and outputs to more effectively meet its classification goals.

Data regression vs. data classification algorithms

Regression and classification algorithms are both standard approaches to organizing data. The biggest difference between them is the type of expected output.

Classification algorithms are ideal for systems that produce a single result from a finite set of potential outcomes. When the results are continuous, such as an output of time or length, a regression algorithm, such as linear regression, is more appropriate.
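
To make the distinction concrete, the sketch below fits one model of each kind on a tiny made-up dataset using scikit-learn; the numbers are placeholders, and scikit-learn is just one of many libraries that could be used.

    from sklearn.linear_model import LinearRegression, LogisticRegression

    X = [[1], [2], [3], [4], [5], [6]]  # one feature per sample

    # Classification: the expected output is one of a finite set of labels.
    labels = ["small", "small", "small", "large", "large", "large"]
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict([[2], [5]]))      # e.g. ['small' 'large']

    # Regression: the expected output is continuous, such as a time or a length.
    lengths = [1.1, 2.0, 2.9, 4.2, 5.1, 5.9]
    reg = LinearRegression().fit(X, lengths)
    print(reg.predict([[2], [5]]))      # values close to 2.0 and 5.0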

[Image: how a regression algorithm works.]

Find out more about data governance and how it ensures data is consistent, trustworthy and not misused.
