Hi,
We are trying to set up the Linux Auditd App on one of our Search Heads. Currently, there are two Indexers getting Auditd related data and both have linux-auditd and TA_linux-auditd Apps installed. App's have been configured to correctly Index the Audit data in the sourcetype needed, i.e. linux:audit.
Both Indexers have been added as a search peer and are connected to the Search Head. When we use the Linux Audit App on the Search Head, it is only able to see data from first indexer.
SH itself can see both Indexes properly and this was verified using a generic search for `sourcetype=linux:audit`.
But when we check the auditd_indices, upon which the App relies, it only sees the Index from first indexer.
so, on the Search Head,
`sourcetype=linux:audit | top index` will list both Indexes
But `| inputlookup auditd_indicies` will only list one.
On the Search Head, apart from linux-auditd and TA_linux-auditd we also have SA-LinuxAuditd installed to help with correlation. Are there any additional steps involved to make sure auditd can look at all indexes?
When we do the search for auditd_indices locally on the Indexers, the one having issues only reports back a "*" instead of the Index name. The first Indexer correctly lists the underlying Index for the same search.
![alt text][1]
Thanks,
~ Abhi
[1]: /storage/temp/136277-auditd-indices.png
↧
Why is the Linux Auditd app unable to see all indexes?
↧
How do I add a time range to a datamodel search that cannot use tstats?
I have a data model where the object is generated by a search which doesn't permit the DM to be accelerated which means no tstats. I am wanting to do a appendcols to get a delta between averages for two 30 day time ranges. The search I am trying to get to work is:
| datamodel TEST One search
| `drop_dm_object_name("One")`
| dedup host-ip plugin_id
| where severity > 0
**| where earliest=-30d@d latest=+0s**
| stats dc(plugin_id) AS signature_count by host-ip
| stats avg(signature_count) as current_avg
| appendcols [| datamodel TEST One search | `drop_dm_object_name("One")` | dedup host-ip plugin_id | where severity > 0 | **where earliest=--60d@d latest=-30d@d** | stats dc(plugin_id) AS signature_count by host-ip | stats avg(signature_count) as historical_avg]
| `get_delta(current_avg,historical_avg)`
I know the bold items are incorrect, but I am using them as place holders to show the time ranges that are broken out.
So what the heck am I doing wrong? Thanks in advance for any help.
↧
↧
How do I convert this search into a tstats search leveraging the web datamodel?
Here's the search:
index=proxysg sourcetype=proxysg | replace \*pandora* with www.pandora.com in url | replace \*facebook* with www.facebook.com in url | stats sum(bytes_in) as MB by url | eval MB=round(MB/1024/1024,2) | sort -MB
↧
Suggestion for data model acceleration page summary page of fire brigade application
Currently the query you use is:
| rest /services/data/models | search acceleration=1 | fields title, eai:acl.app | eval app_model_name='eai:acl.app' . " / " . title | eval dm_full_name="DM_" . 'eai:acl.app' . "_" . title
On my Splunk 6.4.0 instance that gets no results, in fact :
| rest /services/data/models
Shows me 12 data models, all within the search application (I am running from the fire brigade application FYI).
However, if you do:
|rest servicesNS/-/-/data/models | search acceleration=1 | fields title, eai:acl.app | eval app_model_name='eai:acl.app' . " / " . title | eval dm_full_name="DM_" . 'eai:acl.app' . "_" . title
I get the expected result, a number of data models which are accelerated, including those provided by the nmon application.
The query:
|rest servicesNS/-/-/data/models
Was provided by Splunk support while during investigating why the REST query would not return all the data models as expected.
Can you update the application please?
Also if you find this useful please vote...
Thanks
↧
Why am I getting eval command error "The arguments to the 'searchmatch' function are invalid" when using the datamodel command?
Added a root event object to data model as so:
index="main" host="*S100-L543*" source!="*geoip*" AND source!="*.xml" AND source!="*.config" AND ( _raw="*Exception*" OR _raw="*Stack Trace*" OR _raw="*Stack trace*" OR _raw="*stack trace*" )
Whenever I use this search through the search bar in the search app, it works with no errors, however, I get this error:
Error in 'eval' command: The arguments to the 'searchmatch' function are invalid.
when attempting to use the datamodel command as such:
| datamodel Exceptions_Data_Model exceptions search
↧
↧
Questions regarding datamodel, stats, NOT, and Macros in my query
This is the query I have:
| tstats `summariesonly` count from datamodel=Threat_Intelligence.Threat_Activity where NOT [| `ppf_subsearch_dm("ppf_threat_activity","threat_match_field,threat_match_value",now(),"Threat_Activity")`] by Threat_Activity.threat_key | `drop_dm_object_name("Threat_Activity")` | `get_threat_attribution(threat_key)` | stats sum(count) as count by threat_category | sort 10 - count
I have a couple questions regarding it:
1) What is the datamodel=Threat_Intelligence.Threat_Activity part doing? If it was just (for example): datamodel=Threat_Intelligence, then it would be counting from the data model node that is named "Threat_Intelligence" (if I'm not mistaken). So what does the .Threat_Activity do to it?
2)Similar to the first question, what is the "by Threat_Activity.threat_key" part doing? I believe the "by" means that it's aggregating by the field "Threat_Activity.threat_key". Again, what is the .threat_key doing there?
3) What is the stats sum(count) as count by threat_category part doing? I've read through the stats page on the Splunk reference site but I'm still not 100% sure what stats sum does. I believe that the other part of that command is renaming what stats sum(count) did as count and aggregating by the field threat_category.
4)Regarding the NOT operator, does the NOT apply to all of: `ppf_subsearch_dm("ppf_threat_activity","threat_match_field,threat_match_value",now(),"Threat_Activity")` ? Also, what is the square bracket doing there and why does a pipe directly follow the NOT operator?
5) Does anyone have any idea of what any of the macros are doing? I don't have the macro definitions for them and I also don't have access to them. I'm pretty sure that the `summariesonly' one directly following tstats just sets tstats to true. But other than that, I'm lost.
If anyone could help me with all or any one of the questions I have, I would really appreciate it.
↧
Does tstats always specify a datamodel?
Basically my problem is that I'm switching Splunk queries that I have into queries for a different search language. I don't yet have the capability to transfer the part of the search that specifies where to search, be it `datamodel=`, or a count by: IDS_Attacks.severity (grouping by the field severity within the parent node IDS_Attacks). So my question is: is `datamodel=` part of every search?
Side question, does anyone who has experience with Elasticsearch know if you can/how to transfer these datamodel specifications to Elasticsearch query language.
↧
What does datamodel do?
I really need help because I've read through the Splunk documentation on tstats and their datamodel pages and I am still really confused about them. Are they just collections of your available data?
Help would be appreciated
↧
When getting started with Linux Auditd, is it necessary to have a data model installed?
I have the "Splunk Add-on for Unix and Linux", the "Splunk App for Unix and Linux", and "Linux Auditd" applications installed. When I bring up the "Linux Auditd" and look for data, there is a lot of nothing. The command starts with `| tstats count WHERE [|inputlookup auditd-indicies] ...`
Does `tstats` require some kind of data model? If so, is the an existing one to use?
Thanks.
↧
↧
Does tstats use datamodels the same way Pivot does?
After reading through the Splunk documentation on pivot a few times, I noticed that it describes how it works with regards to datamodels and data model objects in a way that seems to imply that it's unique. This is what it says,
"How does Pivot work? It uses data models to define the broad category of event data that you're working with, and then uses hierarchically arranged collections of data model objects to further subdivide the original dataset and define the attributes that you want Pivot to return results on. Data models and their objects are designed by the knowledge managers in your organization. They do a lot of hard work for you to enable you to quickly focus on a specific subset of event data."
From what I know, tstats uses datamodels and data model objects in the same way. For example: tstats count(foo) from "datamodelname.objectname" would use datamodels the same way as the Splunk documentation describes how pivot uses them(I believe). I'm just unsure if the usage for both is the same because to me, it seems like the documentation seems to suggest that only pivot uses datamodels this way.
So what I'm asking is: does tstats use datamodels the same way that's described in the pivot usage documentation?
↧
What is the best practice for correlating events from multiple sources?
Hi,
I'm working on a use case with the purpose of investigating user activity over time from multiple log sources and then visualize this on a timeline (Timeline - Custom Visualization app)
Currently I'm combining data models (CIM) with append, but looking at performance this is not efficient and searches takes too long to complete with just a short time-frame specified. The search looks like this at the moment:
| tstats count from datamodel=Authentication where Authentication.action="*" Authentication.user="*" Authentication.user!="unknown" by _time,Authentication.action,Authentication.user,Authentication.app,Authentication.src,Authentication.dest | `drop_dm_object_name("Authentication")` | append [| tstats count from datamodel=Web where Web.action="*" Web.user="*" Web.user!="unknown" by _time,Web.action,Web.user,Web.app,Web.src,Web.dest | `drop_dm_object_name("Web")`] | append [| tstats count from datamodel=Network_Traffic where All_Traffic.action="*" All_Traffic.user="*" All_Traffic.user!="unknown" by _time,All_Traffic.action,All_Traffic.user,All_Traffic.app,All_Traffic.src,All_Traffic.dest | `drop_dm_object_name("All_Traffic")`] | transaction user | fields - *.app,*.user,*.action,*.src,*.dest
Only some of the data models are accelerated, such as Web. And my current understanding is that if I also add `summariesonly=true` to the Web data model in the search above, all other data models that are not accelerated will be excluded from the search?
The sourcetypes are known to me and I could use subsearches instead of data models, but believe this is the best I got to work with at the moment?
Any suggestions to improve the above search or taking a different approach for this use case?
Cheers!
↧
Splunk Enterprise Security: After adding some fields to the IDS data model, why am I not getting any results using the datamodel command?
Hi
I had added some fields to the IDS data model. First, I disabled the acceleration mode, then clicked on add attributes, added some four new fields to IDS data model, clicked rebuild on the IDS data model. After the building status showed 100%, I tried using the datamodel command:
| datamodel Intrusion_Detection IDS_Attacks search
I don't get any searches returned and the IDS dashboard panels on the Splunk Enterprise Security app doesn't seem to work either.
Any help in troubleshooting this issue is greatly appreciated.
↧
What's the difference between these two searches
These are the two queries:
| `tstats` count from datamodel=Authentication by _time,Authentication.action span=10m | timechart minspan=10m useother=`useother` count by Authentication.action | `drop_dm_object_name("Authentication")`
| `tstats` count from datamodel=Web by _time,Web.action span=10m | timechart minspan=10m useother=`useother` count by Web.action | `drop_dm_object_name("Web")`
So I can see that the only difference between the two is that where "Authentication" is in the first one, "Web" is in its place in the second one.
So The first difference is that they are counting from difference datamodels (Web and Authentication). But how is "Authentication.action" different from "Web.action"?
↧
↧
Is this search just counting the number of events in this datamodel?
This is the search:
| tstats count from datamodel=Authentication where nodename=Authentication.Privileged_Authentication by _time span=1h | timechart span=1h count
Is this search counting the number of events in the "Priviliged_Aunthentication" node from the datamodel "Authentication" grouped into 1 hour periods?
↧
How to add keepevicted=true in the datamodel or the query which uses datamodel (Data model has a transaction)?
Hi, I've created a datamodel which has a TRANSACTION. When I try to use the datamodel query for a longer period of time say 7 days , I'm seeing the following error.> Some transactions have been discarded.> To include them, add keepevicted=true> to your transaction command.
Query Used -
| datamodel abc abc_Transaction search
| search xyz
How to add keepevicted=true to the transaction command in the datamodel?
Thanks.
↧
How do I distinguish inbound vs outbound in the Web datamodel
I am trying to use the Web datamodel in Splunk ES. This datamodel seems to be missing the distinction between inbound web traffic and outbound web traffic. In fact it seems mostly to focused on inbound web requests. Am I missing something?
The distinction is important as inbound web requests are more indicative of external entity attack activity
while outbound web requests are more indicative of compromised systems.
I think that I can address the difference by my knowledge that different systems are the source for model inbound and outbound web request data, but it seems that this should be abstracted into the model somehow.
1) Can Splunk consider updating the Web datamodel to include some notion of inbound vs outbound?
2) How are other users of the Web datamodel dealing with this?
↧
Splunk App for AWS: How to have VPC Flow Log options available when creating searches in Splunk?
Hi Team,
We have configured Splunk App for AWS and configured VPC Flow Log to forward logs to Splunk.
We would like to have the options available (like vpc_flow.bytes, vpc_flow.interface_id, vpc_flow.vpcflow_action, etc) on VPC Flow Logs for creating Splunk searches, unfortunately we cannot find documentation.
What we are planning to achieve, if it is possible, is to use data model in Splunk for a custom visualization with flow log data.
↧
↧
After renaming an auto-extracted field in Data Model Editor, why am I unable to reference the renamed field when doing a tstats search?
I've tried this with multiple fields now and the same behavior occurs. What I want is simple:
To auto extract a field, and have it rename to something else so that I don't have to constantly pipe in a rename when I do tstats calls against the data model. Based on my understanding, when I set up the data model and give the field a display name, should this not essentially rename the field? Because this does not seem to be what it does, and in fact I have no idea where the Display Name ever comes into effect. I don't see it showing up anywhere aside from in the data model field list.
For instance, let's say I have a field "dimension" in a source that a data model is pulling in from via the constraints. Now I add this field via Add Attribute -> Auto Extraction and set the rename to instead be "status" instead of "dimension". Now, when I try to do a tstats call, it still only recognizes the field if I call it by the name of "dimension". If I try to reference it by "status" I get nothing. So am I misunderstanding what this rename is suppose to be doing, or is there some caveat that prevents my tstats call from recognizing that fields are supposed to be renamed?
↧
Why Malware Data Model is not working after Splunk Enterprise Security upgrade to 4.5 from 4.1?
Hi
I have upgraded Splunk Enterprise Security as per the documentation, but I see the Correlation searches using Malware Data-model is not working, but I can see the data in Data model Pivots. Anyone had this issue?
When I use the search its showing no events?
↧
How to use join with data model?
I have tried using join to detect the common field from lookup but i need not find the fields that are not present using data model query.
|inputlookup Denied_traffic.csv | join type=inner All_Traffic.src[| tstats `summariesonly` dc(All_Traffic.src) as src from datamodel=Network_Traffic where All_Traffic.src_zone=outside All_Traffic.app!=incomplete All_Traffic.action=dropped OR All_Traffic.action=blocked by All_Traffic.src]
↧