Collecting data from endpoints
A critical step in the incident response process is collecting data from compromised endpoints for forensic analysis. Threat Response provides a feature called Live Response that collects specific information from endpoints for forensic analysis, data correlation, and investigation of potentially compromised systems, using a customizable and extensible framework.
Live Response collects forensic information from endpoints and transfers the results to a network location that you specify. A Live Response package contains configuration files that identify the data to collect from endpoints and the network destination where the collected files are saved.
A destination is a location where forensic data is saved. The server that receives information from Live Response can be an Amazon S3 bucket, or a server that communicates over the SFTP, SCP, or SMB protocol. SMB is supported on Windows only; SMB destinations are not included in Live Response packages for macOS and Linux.
SSH (SFTP/SCP) destinations require a user with write access to the share on the destination. Consider modifying the /etc/ssh/sshd_config file on the server to allow only SFTP or SCP access. As a best practice, use Linux servers as the destinations for SFTP/SCP transfers.
The key exchange algorithms supported by Live Response for SSH destinations include:
- [email protected]
At least one of these algorithms must be supported by the server for SSH (SFTP/SCP) destinations.
For an SMB copy location, the system account is used. SMB shares work with domain-joined endpoints. Either the specific endpoint must have write access, or the Domain Computers group must have write access. The following advanced permissions are required:
- Create files / write data
- Create folders / append data
- Write attributes
For Amazon S3 Bucket copy locations, ensure that clients are synchronized with a time server. Transfers fail if the client time differs from the server time by more than 15 minutes.
For more information on using Amazon S3 Buckets with Live Response, see How to create an AWS S3 Bucket for use with Live Response (login required).
Do not use SMB transfer destinations when a system has been quarantined by Tanium. Live Response uses domain authentication for transfers. When a system is quarantined, it cannot reauthenticate with the domain, and authentication fails.
- From the Threat Response menu, click Management > Live Response. Click Create > Destination.
- In the General Information section, provide a name and description for the destination.
- Select a destination type. Available destination types are S3, SSH, and SMB. The destination type that you select determines which settings are required. Refer to destination types for more information.
- Click Save.
Different types of destinations require different settings.
There is no option for disabling hostkey verification for SSH destinations in Live Response for Threat Response.
For S3 destinations, the following settings are required:

| Setting | Description |
| --- | --- |
| Bucket | The name of the S3 bucket. When you use an S3 bucket as a destination, make sure that clients are synchronized with a time server. Transfers fail if the client time differs from the server time by more than 15 minutes. |
| Access Key ID | An ID that corresponds with a secret access key. For example, AKIAIOSFODNN7EXAMPLE. |
| Secret Access Key | A secret key that corresponds with an access key ID. For example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY. Manage your access keys as securely as you do your user name and password. When you create access keys, the access key ID and secret access key are created as a set, and AWS gives you one opportunity to view and download the secret access key. If you do not download the key, or if you lose it, delete the access key and create a new one. |
| Region | An explicitly defined S3 region. |
| Host | The fully qualified domain name of the host. |
| Port | The port to use for the connection to the destination. The default is 443. |
| Use SSL | Whether SSL encryption is enabled. |
| Force Path Style Types | Forces API calls to use path-style URLs, where the bucket name is part of the URL path. |
| Connection Timeout | The amount of time to attempt to establish a connection. |
| Remote Path | A path on the destination where data is collected. |
For SSH destinations, the following settings are required:

| Setting | Description |
| --- | --- |
| Protocol | Select SFTP or SCP as the protocol that transfers collected files to the destination. The protocol that you select determines whether you are prompted for a private key or a password to authenticate with the destination. |
| Host | The fully qualified domain name of the host. |
| Port | The port to use for the SSH connection to the destination. The default is 22. |
| Username | The user name for the connection to the destination. |
| Password or Private Key | The password for the user name, or a private key to authenticate the connection to the destination. An RSA key must be base64-encoded before you enter it into the private key field. |
In PowerShell you can convert a key to base64 encoding using the following command:

```powershell
[Convert]::ToBase64String([System.IO.File]::ReadAllBytes('<filepath>')) | clip
```

Adding | clip to the end of the command sends the base64 output directly to the clipboard for pasting. If you instead copy and paste from the command-line output, you can introduce carriage returns that break the input and produce the error "invalid character" in the Tanium data entry console.
On Linux you can convert to base64 encoding using the following command:

```shell
base64 -w 0 <filepath>
```

The -w 0 option disables line wrapping and is specific to GNU coreutils; on macOS, base64 does not wrap its output, so `base64 -i <filepath>` produces the same single-line result. Although the OpenSSH key format already contains base64-encoded data, encode the entire key file again before uploading.
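As a cross-platform alternative, the same single-line encoding can be produced with a short Python sketch. The helper name is illustrative, and the demo file stands in for a real key file:

```python
import base64
import tempfile
from pathlib import Path

def encode_key_file(path):
    """Read a private key file and return its contents as a single-line
    base64 string, suitable for pasting into the Private Key field.
    b64encode never inserts line breaks, so no stray carriage returns
    can corrupt the pasted value."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

# Demo with a throwaway file standing in for a real key file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"-----BEGIN OPENSSH PRIVATE KEY-----\n...\n")
encoded = encode_key_file(f.name)
print("\n" in encoded or "\r" in encoded)  # False: the output is one line
```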
| Setting | Description |
| --- | --- |
| Known Hosts | The content of an SSH known hosts file. |
| Connection Timeout | The amount of time to attempt to establish a connection. |
| Remote Path | A path on the destination where data is collected. This path is relative to the home directory of the connecting user. Absolute paths are not supported. |
The SMB transfer protocol is only supported on Windows operating systems. SMB destinations are not included in Live Response packages for macOS and Linux.
For SMB destinations, the following settings are required:

| Setting | Description |
| --- | --- |
| UNC Path | The Universal Naming Convention (UNC) path of the destination. For example, \\server\folder. |
A collection defines the data to collect from an endpoint. The following configurations are provided with Live Response:
- Standard Collection: Use for default data. The standard collection contains file collectors to collect specific files from endpoints. See File Collector Sets for a reference of the file collectors that are contained in each type of collection. The following data is captured by default, and is configurable in the standard collection:
  - Process details
  - Module details
  - Driver details
  - Shim cache
  - Scheduled tasks
  - Recent files
  - Network connections
  - Process handle details
  - Autoruns details
  - Hosts file
- Extended Collection: Use to collect the same data as the standard collection, plus more file-based artifacts, such as the kernel, the Master File Table, the USN Journal, event logs, and registry hive files. The extended collection contains file collectors to collect specific files from endpoints. See File Collector Sets for a reference of the file collectors that are contained in each type of collection. The following data is configurable in the extended collection:
  - Standard and Master Boot Record
  - Master File Table
  - USN Journal
  - Kernel
  - Registry Hives
  - User Profiles
  - Event Logs
  - Prefetch files
  - Chrome user data
  - Recorder database (if present)
  - Index Database (if present)
The option to Collect Recorder Database Snapshot enables you to collect a snapshot of either recorder.db or monitor.db from endpoints. Collect Recorder Database Snapshot creates a snapshot of a recorder database - whether or not it is encrypted - and adds the snapshot to the collection. The snapshot that this module creates is removed from the endpoint when the collection has completed. By default, recorder database snapshots are saved in a folder named RecorderSnapshot on a path that corresponds with the name of the endpoint. For example, <base_directory>\<endpoint_name>\collector\RecorderSnapshot\<database_name>.db.
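The default snapshot path described above can be sketched in Python. The base directory, endpoint name, and database name below are hypothetical:

```python
from pathlib import PureWindowsPath

def snapshot_path(base_directory, endpoint_name, database_name):
    """Build the default destination path of a recorder database snapshot:
    <base_directory>\\<endpoint_name>\\collector\\RecorderSnapshot\\<database_name>.db"""
    return str(PureWindowsPath(base_directory, endpoint_name,
                               "collector", "RecorderSnapshot",
                               database_name + ".db"))

# Hypothetical base directory, endpoint, and database names:
print(snapshot_path(r"D:\LR", "HOST01", "monitor"))
# D:\LR\HOST01\collector\RecorderSnapshot\monitor.db
```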
- Memory Collection: Use for memory acquisition. The memory collection contains file collectors to collect specific files from endpoints. See File Collector Sets for a reference of the file collectors that are contained in each type of collection. Memory data is configurable in the memory collection.
You can create a custom configuration to collect specific data from endpoints.
- From the Threat Response menu, click Management > Live Response. Click Create > Collection.
- In the General Information section, provide a name and description for the collection.
- Select the modules that you want to include in the data collection. A module is a functional area of forensic investigation. For example, the Network Connections module collects data that is helpful to understanding network connections that the endpoint has been involved in. The operating system icons next to each module show the operating systems to which the modules apply.
- Select the File Collector Sets that you want to include in the collection. See File Collectors for more information.
- Under Script Sets, select the script sets that you want to include in the collection. See Script Sets for more information.
- Click Save.
File collector sets define the types of files that you want to collect from endpoints. For example, you can select all files of a specific type, or files that reside on a specific path. Live Response on Windows collects alternate data streams. The name of the alternate data stream is appended to the regular data stream name, preceded by an underscore. For example, if an alternate data stream named hidden_datastream exists for a file named hosts, the alternate data stream is collected as <path>\hosts_hidden_datastream.
When setting a maximum recursive depth, enter -1 to represent unlimited.
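To make the path, file pattern, maximum depth, and maximum files semantics concrete, here is a minimal Python sketch of a file collector. It illustrates the documented behavior (pattern applied to the file name only, -1 meaning unlimited) and is not Live Response code:

```python
import os
import re
import tempfile

def collect_files(root, file_pattern, max_depth=-1, max_files=-1):
    """Sketch of a file collector: walks `root`, applies the regular
    expression to the file name only (never the full path), and honors a
    maximum recursion depth and file count, where -1 means unlimited."""
    pattern = re.compile(file_pattern)
    collected = []
    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        if max_depth != -1 and depth >= max_depth:
            dirnames.clear()              # stop descending past max_depth
        for name in sorted(filenames):
            if pattern.search(name):      # match on the file name only
                collected.append(os.path.join(dirpath, name))
                if max_files != -1 and len(collected) >= max_files:
                    return collected
    return collected

# Demo tree: a "hosts" file at depth 0, 1, and 2.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
for sub in ("", "a", os.path.join("a", "b")):
    open(os.path.join(root, sub, "hosts"), "w").close()

print(len(collect_files(root, r"^hosts$", max_depth=0)))   # 1: no recursion
print(len(collect_files(root, r"^hosts$", max_depth=-1)))  # 3: unlimited
```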
| File Collector Set | File Collector | Featured in | Operating System | Path | File Pattern | Maximum Depth | Maximum Files |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hosts File | Windows Hosts File | Standard, Extended | Windows | %systemdrive%\ | | | |
| | Non-Windows Hosts File | | | | | | |
| Etc Folder Tree | Etc | | | | | | |
| Shell History Files | PowerShell History | Standard, Extended | Windows | %userprofile%\ | | | |
| | Bourne Again (bash) Shell History | Standard, Extended | Linux, Mac | $HOME | ^\.bash_history$ | 0 | 1 |
| | Bourne (sh) Shell History | Standard, Extended | Linux, Mac | $HOME | ^\.sh_history$ | 0 | 1 |
| | Bourne Again (bash) Sessions | Standard, Extended | Mac | $HOME/.bash_sessions | .*history.* | 15 | Unlimited |
| Secure Shell (SSH) Files | User's Known Hosts | Standard, Extended | Linux, Mac | $HOME/.ssh | ^known_hosts$ | 0 | 1 |
| | User's Authorized Keys | Standard, Extended | Linux, Mac | $HOME/.ssh | ^authorized_keys$ | 0 | 1 |
| | Current SSH Users | Standard, Extended | Linux, Mac | /var/run | ^utmp.* | 0 | 1 |
| | SSH Logon Logoff | Standard, Extended | Linux | /var/log | ^wtmp.* | 0 | Unlimited |
| | Failed SSH Logon | Standard, Extended | Linux | /var/log | ^btmp.* | 0 | Unlimited |
| | SSH Last Logged On Users | Standard, Extended | Linux | /var/log | ^lastlog$ | 0 | Unlimited |
| | SSH Daemon Configuration | Standard, Extended | Linux, Mac | /etc/ssh | ^sshd_config$ | 0 | Unlimited |
| | SSH Client Configuration | Standard, Extended | Linux, Mac | /etc/ssh | ^ssh_config$ | 0 | Unlimited |
| Systemd Folder Tree | Systemd | Standard, Extended | Linux | /etc/systemd/system | .* | 15 | Unlimited |
| Kext Details | Kext Details | Standard, Extended | Mac | /var/db/ | | | |
| | Kext Details (v11+) | Standard, Extended | Mac | /var/db/ | | | |
| Master File Table | Windows Master File Table | Extended | Windows | %systemdrive% | (\$MFT$) | 1 | 1 |
| Kernel | Windows Kernel | Extended, Memory | Windows | %systemdrive%\windows\ | | | |
| System Registry Hives | Windows System Registry Hives | Extended | Windows | %systemdrive%\windows\ | | | |
| User Registry Hives | Windows User Registry Hives | Extended | Windows | %userprofile%\ | (^ntuser\.dat$) | 2 | Unlimited |
| Windows Event Logs | Windows Event Logs | Extended | Windows | %systemdrive%\windows\ | | | |
| Windows Prefetch Files | Windows Prefetch Files | Extended | Windows | %systemroot%\prefetch\ | (.*\.pf)\|(layout\.ini)\|(.*\.db)\|(pfsvperfstats\.bin) | 1 | Unlimited |
| Chrome User Data | Windows Chrome User Data - Cache | Extended | Windows | %LOCALAPPDATA%\Google\ | | | |
| | Windows Chrome User Data - Local Storage | Extended | Windows | %LOCALAPPDATA%\Google\ | | | |
| | Windows Chrome User Data - Profile | Extended | Windows | %LOCALAPPDATA%\Google\ | | | |
| | MacOS Chrome Data | Extended | Mac | $HOME/Library/Application Support/Google/Chrome | .* | 9 | Unlimited |
| | Linux Chrome Data | Extended | Linux | $HOME/.config/google-chrome | .* | 9 | Unlimited |
| Tanium Trace Database | Windows Tanium Trace Database | Extended | Windows | %TANIUMDIR%\ | ^monitor\.db(\-)*(wal\|shm\|journal)*$ | 0 | Unlimited |
| Tanium Index Database | Windows Tanium Index Database | Extended | Windows | %TANIUMDIR%\Tools\EPI\ | ^EndpointIndex\.db(\-)*(wal\|shm | | |
| Shell Configuration Files | Bourne Again (bash) Settings | Extended | Linux, Mac | $HOME | ^\.bash(rc\|_profile | | |
| | C Shell (csh and tcsh) Settings | Extended | Linux, Mac | $HOME | ^\.(tcshrc | | |
| Available Shells | Available Shells | Extended | Linux, Mac | /etc | ^shells$ | 0 | 1 |
| Passwd and Group Files | Passwd and Group Files | Extended | Linux, Mac | /etc | ^(passwd\|group)$ | 0 | Unlimited |
| Shadow Files | Shadow Files | Extended | Linux, Mac | /etc | ^(shadow\|gshadow | | |
| Sudoers Configuration | Sudoers File | Extended | Linux, Mac | /etc | ^sudoers$ | 0 | Unlimited |
| | Sudoers.d Folder Contents | Extended | Linux, Mac | /etc/sudoers.d | .* | 15 | Unlimited |
| Mount Points | Mount Points | Extended | Linux | /etc | ^fstab$ | 0 | 1 |
| | NFS Mount Points | Extended | Linux | /etc | ^exports.* | 0 | 1 |
| Preload Shared Libraries | LD Preload Shared Libraries | Extended | Linux | /etc | ld\.so.* | 15 | Unlimited |
| | LD Preload Shared Libraries Configuration Directory | Extended | Linux | /etc/ld.so.conf.d | .* | 0 | Unlimited |
| Auditd Configuration and Rules | LD Preload Shared Libraries | Extended | Linux | /etc/audit | .* | 15 | Unlimited |
| RPM GPG Keys | RPM GPG Keys | Extended | Linux | /etc/pki/rpm-gpg | .* | 15 | Unlimited |
| SSL/TLS Certificates and PKI | SSL/TLS Certificates Directory | Extended | Linux | /etc/pki/tls | .* | 15 | Unlimited |
| | SSL/TLS Certificate Authority Directory | Extended | Linux | /etc/pki/CA | .* | 15 | Unlimited |
| User Recently Used/Deleted Files | Recently Used GTK Files | Extended | Linux | $HOME/.local/share | recently-used\ | | |
| | Recently Deleted Info | Extended | Linux | $HOME/.local/share/ | | | |
| | Recently Deleted Files | Extended | Linux | $HOME/.local/share/ | | | |
| User Vim Configuration | Vim Info | Extended | Linux, Mac | $HOME | ^\.viminfo$ | 0 | Unlimited |
| | Non-Windows Vim Configuration | Extended | Linux, Mac | $HOME | ^\.vimrc$ | 0 | Unlimited |
| | Windows Vim Configuration | Extended | Windows | %homepath%\ | ^_vimrc$ | 0 | Unlimited |
| User Less History | Less History | Extended | Linux, Mac | $HOME | ^\.lesshst$ | 0 | Unlimited |
| User Database History | Database History | Extended | Linux, Mac | $HOME | ^\.(psql\|mysql | | |
| Cron Settings | Cron Files | Extended | Linux, Mac | /etc/ | cron.* | 15 | Unlimited |
| | Cron Logs | Extended | Linux, Mac | /var/log | cron.* | 15 | Unlimited |
You can create custom file collector sets.
- From the Threat Response menu, click Management > Live Response. Click Create > File Collector Set.
- In the General Information section, provide a name and description for the file collection set.
- Click Create File Collector.
- Provide a name for the file collector.
- Provide a path for files to collect. Paths support environment variables and regular expressions. For more information, see Regular expressions and environment variables.
- Provide a file pattern for the files to collect. File patterns support regular expressions. For more information, see Regular expressions and environment variables.
- Specify the maximum depth of directories to recurse from the path you provided.
- Specify the maximum number of files to collect.
- Select Raw to preserve the format of the files that are collected.
- Select the operating systems from which you want the file collector to collect files.
- Click the check mark in the top right to save the file collector.
- Click Save.
You can configure scripts to run on endpoints when you deploy the collection. Supported scripting languages include PowerShell and Python.
- From the Threat Response menu, click Management > Live Response. Click Create > Script Set.
- In the General Information section, provide a name and description for the script set.
- Under Scripts click Add a Script.
- Provide a filename for the script.
- Select Python or PowerShell as the type of script.
- Provide any script arguments to use as part of running the script.
- Add the script source.
- Click Save.
Script output is saved in a file that has the same name as the script, with -results appended. For example, a script named test.ps1 creates output in test.ps1-results. All standard output is directed to the collector directory.
To collect data from endpoints, deploy a Live Response package.
To prevent resource overload on endpoints, only issue this action manually. Do not create a scheduled action.
- From the Threat Response menu, click Management > Live Response. Click Generate Packages.
- Target endpoints for data collection. Use an operating system-based question, for example: Get Computer Name from machines with Is Windows containing "True".
- Select the endpoints from which you want to collect data and click Deploy Action.
- In the Deployment Package field, type Live Response.
- Select the package that matches the collection and destination settings that you want to deploy.
- In the Base Directory field, provide a directory name where files are placed as they are collected. This directory is created under the Remote Path value of the destination that the Live Response package uses. The location of the Remote Path depends on the destination type: in SMB destinations it is explicit, whereas in SSH destinations it is relative to the home directory of the connecting user. For example, if you provide a Base Directory of MyCollection for an SSH destination where the Remote Path is FileCollection, the result is /home/username/FileCollection/MyCollection.
- Optionally select Flatten Output Files if you want all collected files placed in one directory, where each file name includes the original path but the folder structure is not retained.
- Click Show Preview to Continue.
- After you preview the list of endpoints to which the action is being deployed, click Deploy Action.
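How the Base Directory combines with the destination Remote Path can be sketched in Python. The destination types, home directory, and names below are illustrative assumptions, not Live Response internals:

```python
import posixpath
from pathlib import PureWindowsPath

def destination_path(dest_type, remote_path, base_directory,
                     home="/home/username"):
    """Illustrate where collected files land: SSH remote paths are relative
    to the connecting user's home directory, while SMB UNC paths are
    explicit."""
    if dest_type == "ssh":
        return posixpath.join(home, remote_path, base_directory)
    if dest_type == "smb":
        return str(PureWindowsPath(remote_path, base_directory))
    raise ValueError("unsupported destination type: " + dest_type)

print(destination_path("ssh", "FileCollection", "MyCollection"))
# /home/username/FileCollection/MyCollection
print(destination_path("smb", r"\\server\folder", "MyCollection"))
# \\server\folder\MyCollection
```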
Threat Response tests the connection by writing an LRConnectionTest file to the destination. If the write fails, the action tries the other destinations in the transfer configuration in the order they are listed in the configuration file. If all the connection tests fail, the action does not proceed.
Tanium shows the package as complete almost immediately after the package is downloaded on the endpoints. This completion is not accurate because Live Response runs in detached mode. File transfers continue after the action completes.
The actual time to complete the transfer depends on the endpoint activity and connection speed between the endpoint and the destination system.
Data that is transferred to a destination is packaged in a ZIP file. For example, if you selected memory details as an included module, Live Response creates a ZIP file that contains a raw memory dump and additional system files. You can analyze this data with a tool such as Winpmem or Volexity Surge.
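For illustration, a transferred archive can be inspected with Python's zipfile module. The member names below are hypothetical stand-ins for real collection output:

```python
import io
import zipfile

# Build a small in-memory ZIP standing in for a Live Response transfer,
# then list its contents the way you might inspect a collected archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("collector/hosts", "127.0.0.1 localhost\n")
    zf.writestr("memory/physmem.raw", "placeholder for a raw memory image")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    print(names)  # ['collector/hosts', 'memory/physmem.raw']
```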
In addition to the standard action logs on the endpoint (Tanium_Client_Location\Downloads\Action_###\Action_####.log), a log file of activities resides in the same directory. This file follows the naming convention: YYYYMMDDhhmm_LR.log.
When collection completes, the YYYYMMDDhhmm_LR.log is copied to the destination. The action log is not copied to the destination.
Use both the action log and the Live Response log to troubleshoot problems. The action log captures messages written to standard error (stderr).
Paths and file patterns support regular expression syntax.
The File Pattern regular expression is applied to the file name only.
The following table provides some example patterns to show how Live Response uses both regular expressions and environment variables on Windows, Linux, and macOS endpoints.
| Example Live Response task | Operating system | Path | File pattern | Explanation |
| --- | --- | --- | --- | --- |
| Collect hosts file | Windows | %systemdrive%\windows\ | ^hosts$ | Windows applies the regular expression to the file name. |
| | Linux/macOS | /etc | hosts$ | A file name that ends with hosts matches. |
| Collect Bash history of every user | Windows | Not applicable | Not applicable | Not applicable |
| | Linux/macOS | $HOME | \.bash_history$ | A file name that matches .bash_history. |
| Collect a file named findme.txt from the platform root | Windows | C:\ | ^findme.txt$ | The file name matches findme.txt. |
| | Linux/macOS | / | ^findme.txt$ | The file name matches findme.txt. |
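The patterns above can be checked directly with Python's re module, which uses a compatible regular expression syntax (assumed here for illustration):

```python
import re

# The File Pattern is matched against the file name only, never the full path.
print(bool(re.search(r"^hosts$", "hosts")))       # True: exact name match
print(bool(re.search(r"^hosts$", "hosts.bak")))   # False: $ anchors the end
print(bool(re.search(r"hosts$", "vhosts")))       # True: without ^, any name
                                                  # ending in "hosts" matches
print(bool(re.search(r"\.bash_history$", ".bash_history")))  # True

# An unescaped "." matches any character, so ^findme.txt$ also matches
# names such as "findme_txt"; escape the dot to require a literal ".".
print(bool(re.search(r"^findme.txt$", "findme_txt")))   # True
print(bool(re.search(r"^findme\.txt$", "findme_txt")))  # False
```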
Any environment variables that you use resolve as described in the following table.
| Environment variable | Supported operating system | Corresponding value |
| --- | --- | --- |
| %TANIUMDIR% | Windows | The Tanium Client directory. Defaults are: \Program Files\Tanium\Tanium Client\ (32-bit OS) and \Program Files (x86)\Tanium\Tanium Client\ (64-bit OS). |
| $TANIUMDIR | Linux, Mac | The Tanium Client directory. |
| $HOME | Linux, Mac | All user home directories, excluding users whose shell is blocklisted, and including only users whose shell is listed in the /etc/shells file. If there is no /etc/shells file, all shells are allowed. |

Environment variables that are local to the endpoint are supported. For example, if %SYSTEMROOT% is set on an endpoint to expand to C:\WINDOWS, you can use that variable in a path.
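For illustration, Python can simulate how a %VAR%-style path expands. ntpath.expandvars handles the Windows form on any platform; this is a sketch of the expansion rule, not how the Tanium Client resolves variables:

```python
import ntpath
import os

# Simulate an endpoint where %SYSTEMROOT% is set to C:\WINDOWS.
os.environ["SYSTEMROOT"] = r"C:\WINDOWS"

# ntpath.expandvars understands the Windows %VAR% form on any platform.
expanded = ntpath.expandvars(r"%SYSTEMROOT%\prefetch")
print(expanded)  # C:\WINDOWS\prefetch
```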
Last updated: 2/23/2021 10:58 AM