Which index is used to store KPI values?
A. itsi_summary_metrics
B. itsi_metrics
C. itsi_service_health
D. itsi_summary
A is the correct answer because the itsi_summary_metrics index is used to store KPI values in ITSI. This index improves the performance of the searches dispatched by ITSI, particularly for very large environments. Every KPI is summarized in both the itsi_summary events index and the itsi_summary_metrics metrics index.
Which of the following is an advantage of an adaptive time threshold?
A. Automatically alerting when KPI value patterns change over time.
B. Automatically adjusting thresholds as normal KPI values change over time.
C. Automatically adjusting to holiday schedules.
D. Automatically predicting future degradation of KPI values over time.
Explanation: An adaptive time threshold in Splunk IT Service Intelligence (ITSI) refers to the capability of dynamically adjusting threshold values for Key Performance Indicators (KPIs) based on historical data trends and patterns. This feature allows thresholds to evolve as the 'normal' behavior of KPIs changes over time, keeping alerts relevant and reducing the likelihood of false positives and false negatives. The advantage of this approach is that it accommodates natural fluctuations in KPI values caused by changes in business operations, seasonality, or other factors, without requiring manual threshold adjustments. This makes the monitoring system more resilient and responsive to actual conditions, improving the overall effectiveness of IT operations management.
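The mechanics can be illustrated outside of Splunk. The sketch below is plain Python, not ITSI code; the trailing-window data and the 2-sigma band are illustrative assumptions. It shows how a standard-deviation-style band recomputed from recent history drifts as "normal" drifts:

```python
from statistics import mean, stdev

def adaptive_threshold(history, sigmas=2.0):
    """Return (lower, upper) bounds derived from recent KPI history.

    Mimics an adaptive standard-deviation policy: the band is recomputed
    from a trailing training window, so it moves as 'normal' moves.
    """
    mu = mean(history)
    sd = stdev(history)
    return mu - sigmas * sd, mu + sigmas * sd

# A KPI whose normal level has shifted upward over time:
old_window = [50, 52, 48, 51, 49, 50, 52]
new_window = [80, 82, 78, 81, 79, 80, 82]

lo_old, hi_old = adaptive_threshold(old_window)
lo_new, hi_new = adaptive_threshold(new_window)

# The band follows the data: a value of 80 breaches the old band,
# but sits inside the band recomputed from recent history.
print(80 > hi_old)            # True: would alert under the stale band
print(lo_new < 80 < hi_new)   # True: no alert after adaptation
```

The same value stops alerting once the training window reflects the new normal, which is exactly the "thresholds evolve as normal behavior changes" property described above.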
Which of the following describes a way to delete multiple duplicate entities in ITSI?
A. Via a CSV upload.
B. Via the entity lister page.
C. Via a search using the | deleteentity command.
D. All of the above.
D is the correct answer because ITSI provides multiple ways to delete multiple duplicate entities. You can use a CSV upload to overwrite existing entities with new or updated information, or delete them by setting the action field to delete. You can also use the entity lister page to select multiple entities and delete them in bulk. Alternatively, you can use a search command called | deleteentity to delete entities that match certain criteria.
What should be considered when onboarding data into a Splunk index, assuming that ITSI will need to use this data?
A. Use | stats functions in custom fields to prepare the data for KPI calculations.
B. Check if the data could leverage pre-built KPIs from modules, then use the correct TA to onboard the data.
C. Make sure that all fields conform to CIM, then use the corresponding module to import related services.
D. Plan to build as many data models as possible for ITSI to leverage.
When onboarding data into a Splunk index that ITSI will need to use, you should consider the following:
B. Check if the data could leverage pre-built KPIs from modules, then use the correct TA to onboard the data. This is true because modules are pre-packaged sets of services, KPIs, and dashboards designed for specific types of data sources, such as operating systems, databases, and web servers. Modules help you quickly set up and monitor your IT services using best practices and industry standards. To use modules, you need to install and configure the correct technical add-ons (TAs), which extract and normalize the data fields required by the modules.
The other options are not things you should consider because:
A. Use | stats functions in custom fields to prepare the data for KPI calculations. This is not true because using | stats functions in custom fields can cause performance issues and inaccurate results when calculating KPIs. You should use | stats functions only in base searches or ad hoc searches, not in custom fields.
C. Make sure that all fields conform to CIM, then use the corresponding module to import related services. This is not true because not all modules require CIM-compliant data sources. Some modules have their own data models and field extractions that are specific to their data sources. Check the documentation of each module to see what data requirements and dependencies it has.
D. Plan to build as many data models as possible for ITSI to leverage. This is not true because building too many data models can cause performance issues and excessive resource consumption in your Splunk environment. You should only build data models that are necessary and relevant for your ITSI use cases.
Which of the following items describe ITSI Backup and Restore functionality? (Choose all that apply.)
A. A pre-configured default ITSI backup job is provided that can be modified, but not deleted.
B. ITSI backup is inclusive of KV Store, ITSI Configurations, and index dependencies.
C. kvstore_to_json.py can be used in scripts or command line to backup ITSI for full or partial backups.
D. ITSI backups are stored as a collection of JSON formatted files.
Explanation: ITSI provides a kvstore_to_json.py script that lets you back up and restore ITSI configuration data, perform bulk service KPI operations, apply time zone offsets for ITSI objects, and regenerate KPI search schedules. When you run a backup job, ITSI saves your data to a set of JSON files compressed into a single ZIP file.
C and D are correct answers because ITSI backup and restore functionality uses kvstore_to_json.py, either on the command line or as part of custom scripts, to back up ITSI data for full or partial backups. ITSI backups are also stored as a collection of JSON formatted files that contain KV store objects such as services, KPIs, glass tables, etc. A is not a correct answer because there is no pre-configured default ITSI backup job; you can create your own backup jobs or use the command line script or custom scripts to back up ITSI data. B is not a correct answer because ITSI backup is not inclusive of index dependencies. ITSI backup only includes KV store objects and, optionally, some .conf files. You need to use other methods to back up index data.
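The backup layout described above (a collection of JSON files compressed into one ZIP) can be mimicked in a few lines. This is an illustrative sketch in plain Python, not the actual kvstore_to_json.py script; the collection names and object fields are invented:

```python
import json
import zipfile
from pathlib import Path

# Hypothetical KV store objects, keyed by collection name.
collections = {
    "services": [{"title": "Web Service", "_key": "svc1"}],
    "kpis": [{"title": "CPU Load", "service": "svc1"}],
}

backup = Path("itsi_backup_demo.zip")

# Write each collection as its own JSON file inside a single ZIP archive,
# mirroring the "JSON files compressed into one ZIP" backup format.
with zipfile.ZipFile(backup, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, objects in collections.items():
        zf.writestr(f"{name}.json", json.dumps(objects, indent=2))

# A restore reads the files back; the real script upserts them into the KV store.
with zipfile.ZipFile(backup) as zf:
    restored = {Path(n).stem: json.loads(zf.read(n)) for n in zf.namelist()}

print(restored["services"][0]["title"])  # Web Service
```

Note that only KV store objects travel in the archive; as the explanation says, indexed event data is not part of the backup and needs a separate strategy.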
Which of the following is a characteristic of base searches?
A. Search expression, entity splitting rules, and thresholds are configured at the base search level.
B. It is possible to filter to entities assigned to the service for calculating the metrics for the service’s KPIs.
C. The fewer KPIs that share a common base search, the more efficiency a base search provides, and anomaly detection is more efficient.
D. The base search will execute whether or not a KPI needs it.
A base search is a search definition that can be shared across multiple KPIs that use the same data source. Base searches can improve search performance and reduce search load by consolidating multiple similar KPIs. One of the characteristics of base searches is that it is possible to filter to entities assigned to the service for calculating the metrics for the service’s KPIs. This means that you can use entity filtering rules to specify which entities are relevant for each KPI based on the base search results.
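The entity-filtering behavior can be shown with a toy model. The sketch below is plain Python, not SPL, and the hosts, services, and field names are invented: one shared "base search" result set is filtered down to each service's assigned entities before that service's KPI value is computed.

```python
# One shared base-search result set, reused by several KPIs/services.
base_search_results = [
    {"host": "web01", "response_time": 120},
    {"host": "web02", "response_time": 340},
    {"host": "db01", "response_time": 15},
]

# Each service declares which entities are assigned to it.
services = {
    "Web Service": {"web01", "web02"},
    "Database Service": {"db01"},
}

def kpi_avg_response(service_name):
    """Average response_time over only the entities assigned to the service."""
    entities = services[service_name]
    values = [r["response_time"] for r in base_search_results
              if r["host"] in entities]
    return sum(values) / len(values)

print(kpi_avg_response("Web Service"))       # 230.0
print(kpi_avg_response("Database Service"))  # 15.0
```

The expensive part (the base search) runs once; each service only pays for a cheap filter-and-aggregate over the shared results, which is where the efficiency gain comes from.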
Which of the following applies when configuring time policies for KPI thresholds?
A. A person can only configure 24 policies, one for each hour of the day.
B. They are great if you expect normal behavior at 1:00 to be different than normal behavior at 5:00.
C. If a person expects a KPI to change significantly through a cycle on a daily basis, don’t use it.
D. It is possible for multiple time policies to overlap.
Explanation: Time policies are user-defined threshold values used at different times of the day or week to account for changing KPI workloads. Time policies accommodate normal variations in usage across your services and improve the accuracy of KPI and service health scores. For example, if your organization's peak activity is during the standard work week, you might create a KPI threshold time policy that accounts for higher levels of usage during work hours and lower levels of usage during off-hours and weekends. The statement that applies when configuring time policies for KPI thresholds is:
B. They are great if you expect normal behavior at 1:00 to be different than normal behavior at 5:00. This is true because time policies let you define different threshold values for different time blocks, such as AM/PM, work hours/off hours, weekdays/weekends, and so on. This way, you can account for expected variations in your KPI data based on the time of day or week.
The other statements do not apply because:
A. A person can only configure 24 policies, one for each hour of the day. This is not true because you can configure more than 24 policies using different time block combinations, such as 3-hour blocks, 2-hour blocks, 1-hour blocks, and so on.
C. If a person expects a KPI to change significantly through a cycle on a daily basis, don't use it. This is not true because time policies are designed to handle KPIs that change significantly through a daily cycle, such as web traffic volume or CPU load percent.
D. It is possible for multiple time policies to overlap. This is not true because only one time policy can be active at any given time. When you create a new time policy for a time block, the previous policy is overwritten and cannot be recovered.
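The non-overlapping time-block behavior amounts to a simple lookup: every hour of the week maps to exactly one policy, so two policies can never be active at once. A sketch in plain Python, with invented policy names and threshold values:

```python
def active_policy(weekday, hour):
    """Pick the single threshold policy for a weekday (0=Mon) and hour.

    The branches partition the week into disjoint time blocks, so exactly
    one policy applies at any moment -- overlap is impossible by design.
    """
    if weekday >= 5:                 # Saturday / Sunday
        return ("weekend", 40)       # (policy name, critical threshold)
    if 9 <= hour < 17:               # work hours on a weekday
        return ("work_hours", 90)
    return ("off_hours", 60)

print(active_policy(1, 13))  # ('work_hours', 90): Tuesday afternoon
print(active_policy(5, 13))  # ('weekend', 40): Saturday afternoon
print(active_policy(1, 2))   # ('off_hours', 60): Tuesday, 2 AM
```

Raising the critical threshold to 90 during work hours while dropping it to 40 on weekends is the kind of variation option B describes: different "normal" at different times, without manual re-tuning.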