Rotating encryption keys after a security incident is most closely linked to which security concept?
A. Confidentiality
B. Obfuscation
C. Integrity
D. Availability
Explanation
The core security concepts in the CIA triad are Confidentiality, Integrity, and Availability.
Confidentiality is about preventing the unauthorized disclosure of information. It ensures that data is only accessible to those who are authorized to see it.
Encryption is the primary technical control used to enforce confidentiality. It renders data unreadable to anyone who does not possess the correct decryption key.
Rotating encryption keys (replacing them with newly generated keys) after a security incident is a direct action to restore and maintain confidentiality. If there is any possibility that an attacker obtained the keys during the incident, rotating them, and re-encrypting stored data under the new keys, ensures that the compromised keys no longer grant access and that nothing encrypted from that point forward can be read with the stolen keys. This action proactively protects the confidentiality of the data going forward.
Why the Other Options Are Incorrect
B. Obfuscation:
Obfuscation is about making something difficult to understand, but it is not a primary security control like encryption. Key rotation is a concrete cryptographic practice, not merely obfuscation.
C. Integrity:
Integrity ensures that data is accurate and has not been tampered with. While some cryptographic techniques (like digital signatures) protect integrity, the act of rotating a symmetric encryption key is primarily concerned with preventing unauthorized access (confidentiality), not verifying that data is unchanged.
D. Availability:
Availability ensures that systems and data are accessible when needed. Key rotation, if done incorrectly, could potentially impact availability (e.g., if the new keys are not deployed properly). However, its primary purpose and most direct link are to confidentiality, not availability.
Reference
This is a fundamental principle of cryptography and incident response. Standard post-incident procedures, such as those outlined by NIST, often include key rotation as a critical step to mitigate the risk of data exposure and restore confidentiality guarantees.
Which Splunk Enterprise Security framework raises the threat profile of individuals or assets that are performing an unusual amount of suspicious activities?
A. Threat Intelligence Framework
B. Risk Framework
C. Notable Event Framework
D. Asset and Identity Framework
Explanation
The key phrase in the question is "raises the threat profile of individuals or assets." This is the exact purpose of the Risk Framework.
Risk Framework:
This framework is designed for risk-based alerting. Instead of generating a notable event for every single suspicious activity, it assigns a risk score to the asset (device) or identity (user) involved. When multiple suspicious activities associated with the same asset or identity occur, their risk scores accumulate. A notable event is then triggered when the total risk score for that entity exceeds a certain threshold. This effectively "raises the threat profile" and helps identify entities that are performing an unusual amount of suspicious activities, even if each individual activity is low severity.
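As an illustrative sketch (not the exact search that ships with ES), a risk-incident style rule sums the accumulated risk from the ES Risk data model and fires once an entity crosses a threshold. The field names below follow the Risk data model; the threshold of 100 is an assumed example value:

| tstats sum(All_Risk.risk_score) as total_risk_score count as risk_events
    from datamodel=Risk.All_Risk
    by All_Risk.risk_object, All_Risk.risk_object_type
| where total_risk_score > 100

Each individual detection only contributes risk; the notable event is generated by this aggregate search, which is how many low-severity activities accumulate into a raised threat profile for a single user or device.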
Why the Other Options Are Incorrect
A. Threat Intelligence Framework:
This framework is used to import, manage, and match external threat data (e.g., lists of known malicious IPs, file hashes) against internal data. It enriches events with threat context but does not inherently track and score the behavior of internal assets and identities over time.
C. Notable Event Framework:
This framework manages the lifecycle of alerts after they have been created. It provides the workflow for investigation, assignment, and disposition. It does not itself perform the correlation or scoring that leads to the notable event's creation.
D. Asset and Identity Framework:
This is a critical lookup framework. It stores important information about assets (like criticality) and identities (like department). This information is used by the Risk Framework to weight scores (e.g., a suspicious activity by a critical server adds more risk points) and to provide context. However, the Asset and Identity Framework itself does not perform the correlation or risk scoring; it supplies the data for it.
Reference
The Splunk Enterprise Security documentation clearly defines the Risk Framework's role in adaptive response and risk-based alerting. It explains how risk scores are attributed to objects and how these scores accumulate over time to help identify the most threatened entities in your environment, which is a central concept for reducing alert fatigue and focusing on true threats.
As an analyst, tracking unique users is a common task. The Security Operations Center (SOC) manager requested a search, with results in a table format, to track cumulative downloads by distinct IP address. Which example calculates the running total of distinct IP addresses over time?
A. eventtype="download" | bin_time span=1d | stats values(clientip) as ipa dc(clientip) by _time | streamstats dc(ipa) as "Cumulative total"
B. eventtype="download" | bin_time span=1d | stats values(clientip) as ipa dc(clientip) by _time
C. eventtype="download" | bin_time span=1d | table clientip _time user
D. eventtype="download" | bin_time span=1d | stats values(clientip) as ipa dc(clientip) by user | table _time ipa
Explanation
The requirement is to track the cumulative total of distinct IP addresses over time. This means we don't just want the count for each day; we want a running total that adds each day's new unique IPs to the total from all previous days.
Let's break down why option A is correct:
eventtype="download":
Filters for download events.
bin _time span=1d:
Groups events into daily time buckets.
stats values(clientip) as ipa dc(clientip) by _time:
For each day (_time), this calculates two things:
values(clientip) as ipa:
Creates a multi-value list of all unique IP addresses for that day.
dc(clientip):
Calculates the distinct count of IP addresses for that specific day.
streamstats dc(ipa) as "Cumulative total":
This is the crucial command that creates the running total.
streamstats calculates statistics in a streaming manner, row by row.
dc(ipa) takes the list of unique IPs for each day (ipa) and calculates a distinct count across all the lists seen so far. This automatically de-duplicates IPs that appeared on previous days, giving a true cumulative total of unique IPs over the entire timeframe.
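To see that de-duplication in isolation, here is a self-contained sketch that uses synthetic data (the three days and the IP addresses are invented purely for illustration):

| makeresults count=3
| streamstats count as day
| eval ipa=case(day=1, "10.0.0.1,10.0.0.2", day=2, "10.0.0.2,10.0.0.3", day=3, "10.0.0.3")
| makemv delim="," ipa
| eval daily_distinct=mvcount(ipa)
| streamstats dc(ipa) as "Cumulative total"

The daily distinct counts are 2, 2, and 1, but the cumulative totals are 2, 3, and 3: 10.0.0.2 and 10.0.0.3 are each counted only once across days, which is exactly the running total of distinct values the SOC manager asked for.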
Why the Other Options Are Incorrect
B. This search stops after calculating the daily distinct count.
It provides the count per day but does not create a cumulative/running total across days. An IP that appears on Monday and again on Wednesday would be counted in both daily totals but would not be de-duplicated in a cumulative view.
C. This command simply creates a raw table of events.
It does not perform any aggregation (like counting distinct IPs) and certainly does not calculate a running total.
D. This search is flawed in its structure.
It uses by user in the stats command, which groups results by user instead of by time. After that stats call, the _time field no longer exists, so the final table _time ipa cannot produce a time series. It also lacks the streamstats command needed for the cumulative calculation.
Reference:
The streamstats command is the primary SPL command for generating running totals, moving averages, and other cumulative calculations. Its ability to apply functions like dc() (distinct count) to multi-value fields generated by values() is a powerful feature for this specific use case.
An adversary uses "LoudWiner" to hijack resources for crypto mining. What does this represent in a TTP framework?
A. Procedure
B. Tactic
C. Problem
D. Technique
Explanation
The TTP framework breaks down adversary behavior into three distinct layers:
Tactic:
The high-level goal of the adversary (the "why"). For example, "Execution" or "Resource Development."
Technique:
The method used to achieve the tactical goal (the "how"). For example, "Native API" or "Scripting."
Procedure:
The specific implementation of a technique by a particular adversary or malware family (the "what"). This is the lowest level of detail.
In this scenario:
The Tactic is "Impact" (the adversary's goal of disrupting or hijacking the victim's resources).
The Technique is "Resource Hijacking" (T1496), the method of consuming system resources, such as for cryptocurrency mining.
The Procedure is the specific use of the "LoudMiner" malware to carry out that technique.
Therefore, "LoudMiner" represents the specific tool and method (the procedure) used by this particular adversary.
Why the Other Options Are Incorrect
B. Tactic:
A tactic is the adversary's objective. The objective here is to hijack resources for crypto mining, but "LoudMiner" is the specific tool used to achieve that objective, not the objective itself.
C. Problem:
This is not a term used in the TTP framework.
D. Technique:
A technique is a general method. While crypto mining is a technique, "LoudMiner" is not the technique itself; it is a specific instance or implementation of that technique. The technique is the broader category that "LoudMiner" falls under.
Reference
This aligns directly with the MITRE ATT&CK® framework, which is the most common TTP model. In ATT&CK, a Technique (e.g., T1496 - Resource Hijacking) describes the method, while the Procedure is the specific example documented for a group or software (e.g., "LoudMiner has been used to mine for cryptocurrency").
Enterprise Security has been configured to generate a Notable Event when a user has quickly authenticated from multiple locations between which travel would be impossible. This would be considered what kind of an anomaly?
A. Access Anomaly
B. Identity Anomaly
C. Endpoint Anomaly
D. Threat Anomaly
Explanation
This scenario describes an "impossible traveler" alert, which is a classic example of an access anomaly. Let's break down the categories:
Access Anomaly:
This type of anomaly focuses on irregularities in how or from where a user accesses systems and resources. The "impossible travel" scenario is a perfect fit because it detects an anomaly based on the physical improbability of a user authenticating from two geographically distant locations within a time frame that makes travel between them impossible. The core of the detection is the anomalous access pattern.
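A simplified version of this detection can be sketched in SPL. The index, field names (src_ip, action, user), and the 500 mph threshold below are assumptions for illustration only; the correlation search that ships with ES is implemented differently:

index=auth action=success
| iplocation src_ip
| sort 0 user _time
| streamstats current=f window=1 last(lat) as prev_lat last(lon) as prev_lon last(_time) as prev_time by user
| eval dlat=(lat-prev_lat)*pi()/180, dlon=(lon-prev_lon)*pi()/180
| eval a=pow(sin(dlat/2),2) + cos(prev_lat*pi()/180)*cos(lat*pi()/180)*pow(sin(dlon/2),2)
| eval miles=2*3959*atan2(sqrt(a), sqrt(1-a))
| eval speed_mph=miles/((_time-prev_time)/3600)
| where speed_mph > 500

The haversine calculation turns two consecutive login geolocations into a great-circle distance, and any user whose implied travel speed exceeds the threshold is flagged, regardless of which device was used. The detection looks purely at the access pattern, which is why it is classified as an access anomaly.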
Why the Other Options Are Incorrect
B. Identity Anomaly:
While this event involves a user's identity, an identity anomaly typically refers to issues with the identity itself or its management. Examples include a user account being created with excessive privileges, an account being used outside of its normal lifecycle, or anomalies in authentication logs that suggest credential theft (like a spike in failed logins). The "impossible traveler" is more about the context of the access than the identity's properties.
C. Endpoint Anomaly:
This would involve unusual activity on a specific device (an endpoint), such as unusual process execution, registry modifications, or network connections originating from the host. This alert is based on user authentication events from potentially any device, not the behavior of a single endpoint.
D. Threat Anomaly:
This is a very broad term and not a standard, specific category like the others listed. It could encompass any of the above. However, within the context of Splunk ES's anomaly detection framework, "Access Anomaly" is the precise and correct classification for this specific detection.
Reference
In Splunk Enterprise Security, correlation searches and adaptive response actions are often categorized by the type of risk they represent. The "impossible traveler" use case is consistently documented as an Access-based anomaly because it detects improbable access patterns that deviate from a user's normal behavior, indicating a potentially compromised account.
What feature of Splunk Security Essentials (SSE) allows an analyst to see a listing of current on-boarded data sources in Splunk so they can view content based on available data?
A. Security Data Journey
B. Security Content
C. Data Inventory
D. Data Source Onboarding Guides
Explanation
Splunk Security Essentials (SSE) is designed to help organizations understand and implement Splunk's security capabilities. A key feature for getting started is understanding what data is already available.
Data Inventory:
This is a specific feature within SSE that connects to your Splunk instance and provides a listing of the data sources (sourcetypes) that are already being ingested. It shows you what you have available. More importantly, it then maps this available data to the security use cases (detections, correlations, etc.) that you can immediately enable because the necessary data is already there. This allows an analyst to "view content based on available data."
Why the Other Options Are Incorrect
A. Security Data Journey:
This feature in SSE provides guidance on the value of different data sources and the steps required to onboard them. It's more about planning for future data collection rather than showing a listing of what is currently available.
B. Security Content:
This is a broad category within SSE that includes all the detections, correlations, and investigations. It is the "content" itself, not the feature that lists available data sources to filter that content.
D. Data Source Onboarding Guides:
These are instructional resources within SSE that explain how to bring a specific data source into Splunk. Like the "Security Data Journey," this is focused on the process of adding new data, not on taking an inventory of existing data.
Reference
The Splunk Security Essentials documentation highlights the Data Inventory as a core feature for accelerating security maturity. It explicitly states that the Data Inventory "shows you which data sources you are already ingesting and which use cases you can enable with your current data." This directly matches the description in the question.
An analyst learns that several types of data are being ingested into Splunk and Enterprise Security, and wants to use the metadata SPL command to list them in a search. Which of the following arguments should she use?
A. metadata type=cdn
B. metadata type=sourcetypes
C. metadata type=assets
D. metadata type=hosts
Explanation
The metadata command in Splunk is used to retrieve information about the indexed data itself, rather than the events. The question asks for a command to list the types of data being ingested. In Splunk terminology, the "type" of data is defined by its sourcetype.
Sourcetype:
This is a key piece of metadata that tells Splunk how to parse and format the data (e.g., access_combined for web logs, WinEventLog for Windows events). Using metadata type=sourcetypes returns a list of all sourcetypes present in the indexes being searched, along with an event count and first/last seen timestamps for each. This directly answers the question of "what types of data are being ingested."
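For example, a search along these lines lists every ingested sourcetype together with its event count and first/last seen times (the strftime calls simply make the epoch timestamps readable):

| metadata type=sourcetypes
| sort - totalCount
| eval firstTime=strftime(firstTime, "%F %T"), lastTime=strftime(lastTime, "%F %T")
| table sourcetype totalCount firstTime lastTime

By default the command runs against the default search indexes; add one or more index=<name> arguments to widen the scope.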
Why the Other Options Are Incorrect
A. metadata type=cdn:
"cdn" is not a valid argument for the metadata command. The valid types are hosts, sources, sourcetypes.
C. metadata type=assets:
"assets" is not a valid argument for the metadata command. While Splunk ES has an "Assets and Identities" framework, you do not use the metadata command to list them.
D. metadata type=hosts:
This command would return a list of all host values (the names of the systems that sent the data), not the types of data. It tells you where the data came from, not what kind of data it is.
Reference
The official Splunk documentation for the metadata command gives its core syntax as:
| metadata type=<hosts | sources | sourcetypes> [index=<index-name>]
Therefore, sourcetypes is the correct argument to list the different categories or formats of data that have been ingested.