Gaining organizational effectiveness
The four key organizational governance steps to maximize the value delivered by Integrity Monitor are as follows:
- Develop a dedicated change management process. See Change management.
- Define distinct roles and responsibilities. See RACI chart.
- Track operational maturity. See Operational metrics.
- Validate cross-functional alignment. See Organizational alignment.
Change management
Develop a tailored, centralized change management process for integrity monitoring activities, taking into account the new capabilities provided by Tanium.
- Update SLAs for file and registry integrity monitoring activities, from identification to remediation of unapproved changes.
- Identify key resources in the organization to review and approve changes to integrity monitoring requirements, to minimize unexpected or unapproved changes.
- Align activities to key resources for Tanium integrity monitoring activities across security, operations, and risk/compliance teams.
- Identify maintenance windows for all integrity monitoring changes to optimize integrity monitoring effectiveness.
- Create a Tanium steering group (TSG) for integrity monitoring activities, to expedite reviews and approvals of processes that align with SLAs.
RACI chart
A RACI chart identifies the team or resource that is Responsible, Accountable, Consulted, or Informed for each activity, and serves as a guideline for the key activities across the security, risk/compliance, and operations teams. Every organization has specific business processes and IT organization demands. The following table represents Tanium’s point of view for how organizations should align functional resources to integrity monitoring. Use the following table as a baseline example.
| Task | Security team | Operations team | Risk/Compliance team | Executive team | Rationale |
| --- | --- | --- | --- | --- | --- |
| Define watchlists | C | C | R/A | I | The risk/compliance team creates watchlists that identify files and registry paths to be monitored, with review and input from the security and operations teams, based on regulatory compliance (such as PCI or Sarbanes-Oxley) and security requirements. |
| Configure and deploy monitors | C | R/A | R/A | I | The risk/compliance and operations teams share the responsibility of creating monitors that define which watchlists are used for each computer group, in consultation with the security team. The operations team deploys the monitors to endpoints. |
| Monitor events in real time | C | C | R/A | I | The risk/compliance team owns real-time monitoring and review of the events that result from creation, deletion, and changes to files or registry paths in the watchlists in deployed monitors. The security and operations teams are consulted, and all three teams review results to make adjustments to watchlists as necessary. |
| Use rules to apply labels | C | C | R/A | - | Labels are automatically applied by rules and, optionally, by integration with IT workflows in ServiceNow Change Management. The risk/compliance team creates rules and labels, with review and input from the operations and security teams. |
| Investigate unexpected events | R/A | - | C | I | The security team performs active investigations of unexpected events in consultation with the risk/compliance team. |
| Remediate unapproved events | C | R/A | C | I | The operations team remediates events determined to be unapproved, in consultation with the security and risk/compliance teams. The executive team is informed of the final remediation. |
| Report compliance data | C | C | R/A | I | The risk/compliance team, with input from the security and operations teams, configures reporting of unexpected events (typically unlabeled events) and, optionally, expected events (typically events with specific labels). Using Connect, events can be sent to an external repository, such as a SIEM or SOAR solution, ServiceNow, or a SQL database. Trends boards show aggregated data from events and are reviewed by all teams. |
| Routinely review compliance data from reports | - | - | R/A | I | The risk/compliance team is responsible for reporting compliance data from Trends and Connect to the executive team and to auditors. External repositories such as a SIEM or a SQL database provide long-term storage for compliance data for auditing. |
Organizational alignment
Successful organizations use Tanium across functional silos as a common platform for high-fidelity endpoint data and unified endpoint management. Tanium provides a common data schema that enables security, operations, and risk/compliance teams to assure that they are acting on a common set of facts that are delivered by a unified platform.
In the absence of cross-functional alignment, functional silos often spend time and effort in litigating data quality instead of making decisions to improve integrity monitoring.
Operational metrics
Managing an integrity monitoring program successfully includes operationalizing the technology and measuring success through key benchmarking metrics. The four key processes to measure and guide the operational maturity of your Tanium Integrity Monitor program are as follows:
| Process | Description |
| --- | --- |
| Usage | How and when Tanium Integrity Monitor is used in your organization (for example, whether Integrity Monitor supplements a legacy tool) |
| Automation | How Tanium Integrity Monitor is automated across endpoints, and how well it is leveraged in the automation of other systems |
| Functional integration | How integrated Tanium Integrity Monitor is across security, operations, and risk/compliance teams |
| Reporting | How Tanium Integrity Monitor reporting is automated, and who the audience of Integrity Monitor reporting is |
In addition to the key integrity monitoring processes, the key benchmark metrics that align to the operational maturity of the Tanium Integrity Monitor program to achieve maximum value and success are as follows:
| Executive metrics | Integrity Monitor Server Coverage | Mean Unexpected Change Events per Endpoint | Expected vs Unexpected Change Events |
| --- | --- | --- | --- |
| Description | Percentage of all servers across the environment that are both monitored by Integrity Monitor and reporting a healthy status within the past 24 hours | Mean number of unlabeled change events per endpoint over time | Number of unlabeled and labeled events over time |
| Instrumentation | Number of endpoints running a server operating system that are monitored by Integrity Monitor and reporting a healthy status, divided by the total number of endpoints running a server operating system | All unlabeled change events for each 24-hour period, divided by the total number of endpoints monitored by Integrity Monitor | The total number of unlabeled events and labeled events for each 24-hour period; the ratio can be calculated by dividing unlabeled events by the sum of unlabeled events and labeled events |
| Why this metric matters | If servers are not monitored, then changes on critical infrastructure might not be tracked, and unapproved changes cannot be identified. Ensuring that changes are tracked and validated helps prevent unauthorized modifications, outages, and malware/spyware. Adhering to regulatory compliance standards that include an integrity monitoring process for servers helps avoid costly fees and fines. | If there is a high number of unexpected changes in critical infrastructure and systems, then unapproved changes cannot be remediated in a timely manner, and tighter controls might be needed. | A high ratio of unexpected changes to expected changes can indicate an immature change control process or a need for tuning watchlists and rules. |
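The instrumentation formulas above can be sketched in code. The following Python example is illustrative only: the function names and inputs are hypothetical, not a Tanium API. It assumes you have already exported the daily event and endpoint counts, for example to a SIEM or SQL database through Connect.

```python
# Hypothetical sketch of the three benchmark metric calculations.
# All inputs are counts you would export from your reporting pipeline;
# none of these functions are part of an actual Tanium API.

def server_coverage(healthy_monitored_servers: int, total_servers: int) -> float:
    """Percentage of servers monitored by Integrity Monitor and healthy
    within the past 24 hours."""
    if total_servers == 0:
        return 0.0
    return 100.0 * healthy_monitored_servers / total_servers

def mean_unexpected_per_endpoint(unlabeled_events_24h: int,
                                 monitored_endpoints: int) -> float:
    """Mean unlabeled (unexpected) change events per monitored endpoint
    for one 24-hour period."""
    if monitored_endpoints == 0:
        return 0.0
    return unlabeled_events_24h / monitored_endpoints

def unexpected_ratio(unlabeled_events_24h: int, labeled_events_24h: int) -> float:
    """Percentage of all change events that are unexpected (unlabeled)."""
    total = unlabeled_events_24h + labeled_events_24h
    if total == 0:
        return 0.0
    return 100.0 * unlabeled_events_24h / total

# Example with made-up numbers: 950 of 1,000 servers healthy and monitored,
# 120 unlabeled events across 4,000 endpoints, 2,880 labeled events.
print(server_coverage(950, 1000))               # 95.0
print(mean_unexpected_per_endpoint(120, 4000))  # 0.03
print(unexpected_ratio(120, 2880))              # 4.0
```

Tracking these three numbers per 24-hour period gives the time series that the Trends boards described above visualize.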
Use the following table to determine the maturity level for Tanium Integrity Monitor in your organization.
| | | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
| --- | --- | --- | --- | --- | --- | --- |
| Process | Usage | Integrity Monitor is configured and used by exception; some monitors are deployed to test groups or other small deployments | Integrity Monitor is used to audit the effectiveness of legacy tooling; simple watchlists are created and deployed in one or more monitors | Integrity Monitor is used as the default tool for file and registry integrity monitoring only on systems in scope for regulatory compliance; legacy tooling might be used to audit file integrity monitoring effectiveness; multiple watchlists have been created and reused between monitors | Integrity Monitor is used as the default tool for file and registry integrity monitoring on all critical endpoints; legacy tooling might be used to audit file integrity monitoring effectiveness; complex watchlists with includes and excludes have been created and reused between monitors | Integrity Monitor is used as the default tool for file and registry integrity monitoring on all endpoints; watchlists are aligned to regulatory compliance requirements and surveys from business owners that identify critical and custom applications |
| | Automation | Rules are not used; review of events is fully manual | Rules are not used; review of events is fully manual | Rules are in use to apply labels to events | Well-tuned rules are in use to apply labels to all events | Well-tuned rules are in use to apply labels to all events; ServiceNow integration is used to label events associated with approved change requests or tasks |
| | Functional integration | Functionally siloed | Consult with or take direction from peers in first-level risk/compliance and security teams | Consult with or take direction from peers in first- and second-level risk/compliance and security teams; all data is sent to a SIEM, SOAR, or other data lake or log solution using Connect; default labels are used for events | Consult with or take direction from peers in first- and second-level risk/compliance, security, threat management, and security operations center teams; all data is sent to a SIEM, SOAR, or other data lake or log solution using Connect; custom labels are used for events | Consult with or take direction from peers in first- and second-level risk/compliance, security, threat management, and security operations center teams; only labeled events are sent to a SIEM, SOAR, or other data lake or log solution using Connect; remediation policies are in place in Protect |
| | Reporting | Manual | Manual; the Integrity Monitor Trends gallery is imported | Automated; Trends boards specific to your environment are created | Automated; Trends boards specific to your environment are created, and Trends reports are distributed automatically | Automated; Trends boards specific to your environment are created, and Trends reports are distributed automatically |
| Metrics | Integrity Monitor Server Coverage | 0–79% | 80–88% | 89–94% | 95–98% | 99–100% |
| | Mean Unexpected Change Events per Endpoint | Frequent spikes or sharply upward trends in unexpected events | Occasional spikes or upward trends in unexpected events | Minimal spikes in unexpected events | Minimal spikes in unexpected events | No events are unexpected |
| | Expected vs Unexpected Change Events | 21–100% of events are unexpected | 11–20% of events are unexpected | 6–10% of events are unexpected | 1–5% of events are unexpected | No events are unexpected |
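As an illustration, the numeric thresholds in the Metrics rows above can be turned into a simple lookup. The helper below is a hypothetical sketch, not a Tanium feature, and covers only the Integrity Monitor Server Coverage metric; the other two metrics are trend-based and need human review.

```python
# Hypothetical helper: map the Integrity Monitor Server Coverage
# percentage to a maturity level, using the thresholds from the table
# above (0-79% = 1, 80-88% = 2, 89-94% = 3, 95-98% = 4, 99-100% = 5).

def coverage_maturity_level(coverage_pct: float) -> int:
    """Return the maturity level (1-5) for a server coverage percentage."""
    thresholds = [(99.0, 5), (95.0, 4), (89.0, 3), (80.0, 2)]
    for floor, level in thresholds:
        if coverage_pct >= floor:
            return level
    return 1  # anything below 80% is Level 1

print(coverage_maturity_level(95.0))  # 4
print(coverage_maturity_level(72.5))  # 1
```

A lookup like this is useful for turning an exported coverage number into a dashboard indicator, but the level it reports is only as good as the underlying health and inventory data.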
Last updated: 10/14/2020 2:22 PM