Investigating findings
When a compliance check fails or a vulnerability is found, use the Investigate feature to find the actual values on the endpoints that caused the issue.
Before you begin
Investigations are only supported:
- With the Tanium Scan Engine
- On the Tanium Client version 7.4 or later
- With Tanium Interact 2.14.106 or later for full functionality (converting Findings filters to Investigations filters is not supported with earlier versions of Interact)
Investigations are NOT supported:
- In low resource mode
- On unmanaged devices
The following RBAC permission is required: Comply Investigation Execute. See User role requirements.
The Investigation feature only becomes available for existing assessments when they are redeployed or run again as part of a set schedule.
For information on the Open Vulnerability and Assessment Language (OVAL), you can refer to oval.mitre.org.
Investigate compliance findings
Compliance checks are defined through benchmarks, which are listed on the Standards page in Comply. Each benchmark contains a set of rules, and each rule contains a set of checks (criteria). When investigating compliance findings, you examine the expected values for the checks of the selected rule and compare them to the actual values returned when the rule was evaluated on the endpoint.
To investigate compliance findings:
- Select Findings from the Comply menu.
- In the Compliance tab, select the Rule or All Findings tab. Filter the list if necessary, either by Computer Group or by using any of the available filter items. You can also investigate findings from the All Findings tab.
- Select the rule check box and click the Investigate button to open the Benchmark Rule Details window. (You can only investigate one rule at a time.) You can also click the Get more details arrow to open the flyout and click the Investigation button located there.
If you receive a No rule checks message after you click Investigate, that means there are no checks on the endpoints for the selected rule. This may be because the rule is defined for informational purposes only and therefore has no criteria and nothing to evaluate.
Compliance investigations
In the Investigation/Benchmark Rule Details window, there are several sections of information.
- The Details section has the general data about the rule, including the Benchmark, Description, Rationale, and Remediation suggestion.
- The table with the list of checks (also called tests) has several columns:
- Status - Indicates whether the endpoints listed for this check Pass, Fail, provided only Informational results, or experienced an Error. Informational means a rule was checked, but the output from the engine is only informational for auditors or administrators and is not a compliance category. There are two kinds of status errors. An OVAL Error indicates there was a problem collecting information or analyzing a check on the endpoint. An Investigation Error is a Tanium error, likely a Tanium Interact TSE-error. See Tanium Interact User Guide: Troubleshooting.
- Test Description - This text is taken from the OVAL content for that check.
- Expected Objects - The type of object that is expected to be checked on the endpoint.
- Expected States - The property of the object being checked on the endpoint.
- Actual Values - The object found on the endpoint and all the associated properties of that object. The property names are in black and the actual values are in green.
- Type - The type of check: Compliance, Inventory, or SCE. The difference between a Compliance check and an Inventory check is a field in the OVAL definition (see the sketch following the notes below). A Compliance check verifies that the system meets a predefined state. An Inventory check verifies whether the required software is installed on the system. An SCE (script check engine) check covers items that OVAL cannot check and is packaged together with the benchmark.
- Count - The number of endpoints that have the actual value listed for the check.
No Expected Objects may be displayed if the OVAL Test definition for the test that was evaluated did not specify any Object. This can occur when there was an error reported for the particular investigation.
No Expected States may be displayed if the OVAL Test definition for the test that was evaluated did not specify any State (or property) of the Expected Object. Note that States are not a required attribute of a Test. To satisfy the Test, it may be sufficient to simply specify the Object. An example of this would be to check for the existence of a File (no state is needed in this case).
No Matching Objects may be displayed if there was no matching object found on the endpoint that matched the specified Expected Object.
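As a rough illustration of that field, the following simplified, hypothetical OVAL fragment (namespaces omitted, titles and IDs invented) shows how the definition class distinguishes a Compliance check from an Inventory check:

    <!-- Compliance: does the system meet a predefined state? -->
    <definition id="oval:org.example:def:10" class="compliance" version="1">
      <metadata>
        <title>Example password policy requirement is met</title>
      </metadata>
      <criteria>
        <criterion test_ref="oval:org.example:tst:10" comment="Password policy value meets the required setting"/>
      </criteria>
    </definition>

    <!-- Inventory: is the required software installed? -->
    <definition id="oval:org.example:def:20" class="inventory" version="1">
      <metadata>
        <title>Example application is installed</title>
      </metadata>
      <criteria>
        <criterion test_ref="oval:org.example:tst:20" comment="Example application package is present"/>
      </criteria>
    </definition>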
Filter tests list
You can filter tests in the table by Status, Definition Type, or any of the other available grid filters.
Get more details
Click the arrow on a check in the Investigation/Benchmark Rule Details window to see details about that check. From this window, you can click the View Endpoints and View XML buttons.
- View Endpoints - Click this button to see all the endpoints that have the actual value for that check. When you click View Endpoints, the server asks live questions (as opposed to Findings, where results are cached). Because these are live questions, the endpoint count may not be the same as it is in Findings; endpoints may have gone offline in the interim. Also note that the percentage number under the Endpoints heading continues to update as endpoints continue to respond.
- View XML - Click this button to view the raw OVAL XML for this check. Viewing the XML shows you the details of what the check does. The first section is the test, the next section is the expected object with its listed properties, and the next section is the expected state. (A sketch follows this list.)
- View Script Details - Only SCE checks will have this additional button available. Click View Script Details to see the raw script that was run on the endpoint to find a specific check.
- Check ID - A semicolon-separated string that contains the Benchmark Name, Benchmark Version, Profile Name, Profile Version, and Rule ID.
- Test ID - The ID of the test that was evaluated on the endpoint.
- Actual Values - The values that were collected to evaluate the test with the specified Test ID.
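To make the View XML layout more concrete, here is a simplified, hypothetical OVAL fragment (namespaces and some required attributes omitted, IDs and values invented) showing the three sections in order: the test, the expected object, and the expected state.

    <!-- The test: references the object to collect and the state it must satisfy -->
    <registry_test id="oval:org.example:tst:1" version="1" check="all" comment="Example registry value equals 1">
      <object object_ref="oval:org.example:obj:1"/>
      <state state_ref="oval:org.example:ste:1"/>
    </registry_test>

    <!-- The expected object: what to look for on the endpoint -->
    <registry_object id="oval:org.example:obj:1" version="1">
      <hive>HKEY_LOCAL_MACHINE</hive>
      <key>SOFTWARE\Policies\Example</key>
      <name>ExampleSetting</name>
    </registry_object>

    <!-- The expected state: the property values the collected object must have -->
    <registry_state id="oval:org.example:ste:1" version="1">
      <value datatype="int" operation="equals">1</value>
    </registry_state>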
Investigate vulnerability findings
Vulnerability checks are defined through a vulnerability source, which is listed on the Standards page. Each source has a list of CVEs, and each CVE has vulnerability definitions and/or patch definitions. Each definition has a set of criteria or checks. When investigating vulnerability findings, you are examining the expected values for checks and comparing them to the actual values that were discovered on the endpoint.
To investigate vulnerability findings:
- Select Findings from the Comply menu.
- In the Vulnerability tab, from the Check ID or All Findings tab, locate a particular CVE using any of the available filters.
- Select the CVE check box and click the Investigate button to open the CVE Details window. (You can only investigate one CVE at a time.) You can also click the Get more details arrow to open the flyout and click the Investigation button located there.
Vulnerability investigations
In the Investigation/CVE Details window, there are several sections of information.
- The Details section has the general data about the CVE, including the CVE ID, score, and severity.
- The table with the list of checks (also called tests) has several columns:
- Status - This indicates whether the endpoints listed for this check were found=True, not found=False, or experienced an Error. Note that it is the combination of checks that were found and not found that determines the vulnerability (see the sketch following the notes below). There are two kinds of status errors. An OVAL Error indicates there was a problem collecting information or analyzing a check on the endpoint. An Investigation Error is a Tanium error, likely a Tanium Interact TSE-error. See Tanium Interact User Guide: Troubleshooting.
- Test Description - This text is taken from the OVAL content for that check.
- Expected Objects - The type of object that is expected to be checked on the endpoint.
- Expected States - The property of the object being checked on the endpoint.
- Actual Values - The object found on the endpoint and all the associated properties for that object. The property names are in black and the actual values are in green.
- Type - This indicates whether the check was found in an OVAL Vulnerability definition or an OVAL Patch definition. Some checks are found in both.
- Endpoints - The number of endpoints that have the actual value listed for the check.
No Expected Objects may be displayed if the OVAL Test definition for the test that was evaluated did not specify any Object. This can occur when there was an error reported for the particular investigation.
No Expected States may be displayed if the OVAL Test definition for the test that was evaluated did not specify any State (or property) of the Expected Object. Note that States are not a required attribute of a Test. To satisfy the Test, it may be sufficient to simply specify the Object. An example of this would be to check for the existence of a File (no state is needed in this case).
No Matching Objects may be displayed if there was no matching object found on the endpoint that matched the specified Expected Object.
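For context on how found and not-found checks combine, here is a simplified, hypothetical OVAL fragment (namespaces omitted, IDs and CVE number invented) of a vulnerability definition whose criteria require the affected package to be found and the fixing update to be not found:

    <definition id="oval:org.example:def:100" class="vulnerability" version="1">
      <metadata>
        <title>Hypothetical vulnerability in example-package</title>
        <reference source="CVE" ref_id="CVE-0000-00000"/>
      </metadata>
      <criteria operator="AND">
        <!-- Vulnerable only if the affected package is found... -->
        <criterion test_ref="oval:org.example:tst:101" comment="example-package is installed"/>
        <!-- ...and the fixing update is not found -->
        <criterion negate="true" test_ref="oval:org.example:tst:102" comment="fixing update is installed"/>
      </criteria>
    </definition>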
Filter tests list
You can filter tests in the table by Status, Definition Type, or any of the other available grid filters.
Get more details
Click the arrow on a check in the Investigation/CVE Details window to see details about that check, including the list of vulnerability definitions and/or patch definitions.
By default, only definitions that include the check are shown. To see other definitions that are related to that CVE but do not include that check, deselect the Has Test check box; any such definitions are then listed.
From this window, you can also click the View Endpoints and View XML buttons.
- View Endpoints - Click this button to see all the endpoints that have the actual value for that check. When you click View Endpoints, the server asks live questions (as opposed to Findings, where results are cached). Because these are live questions, the endpoint count may not be the same as it is in Findings; endpoints may have gone offline in the interim. Also note that the percentage number under the Endpoints heading continues to update as endpoints continue to respond.
- View XML - Click this button to view the raw OVAL XML for this check. Viewing the XML shows you the details of what the check does.
- Check ID - The CVE ID.
- Test ID - The ID of the test that was evaluated on the endpoint.
- Actual Values - The values that were collected to evaluate the test with the specified Test ID.
Export investigations
You can export investigations to a CSV file by doing the following:
- From the Compliance or Vulnerability Investigation window, click the Download as CSV button.
- Select Download Test Results or Download Test Results with Endpoint List. Downloading results with endpoints takes longer than downloading without that information. When you include endpoints, a live question is asked to obtain endpoint data.
- Enter the following information into the Export window.
- Filename - Use the default filename or enter your own name.
- Include Headers - This is selected by default.
- Compression Type - Choose None, Zip (default), or Gzip.
- Columns Selection:
- All columns (including hidden columns) - In addition to the columns available in the Test Values grid, select this option to also include columns from the Details view.
- Visible columns only - Only include items visible in the Test Values grid.
- Custom set of columns - Select this option to choose which columns to include from a provided list.
- Click the Export button. When the CSV file is ready, it is automatically downloaded. A message at the top of the Investigation window lets you know the export file is building.
Click View Status in the message to go to the Reports > Exports tab. The status will display on the Exports tab and then disappear once the file is ready.
Unlike other exports, this file is not saved in the Reports > Exports tab.