Channel: Questions in topic: "datamodel"

Splunk Enterprise Security: How can I do a cidrmatch against a data model field?

I'm working with Splunk Enterprise Security, trying to build and refine correlation searches against the Network Traffic data model. I want to exclude destination addresses in RFC 1918 space. When working with the data model, how do you express the equivalent of `NOT cidrmatch("172.16.0.0/20", All_Traffic.dest)`? Every combination I try gives me the error: Error in 'TsidxStats': WHERE clause is not an exact query
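One workaround is to do the CIDR exclusion after tstats rather than in its WHERE clause, since that WHERE clause only accepts exact (indexed) predicates. A minimal sketch, assuming the CIM Network_Traffic model; note the single quotes around the dotted field name, which are required in eval-style expressions:

    | tstats summariesonly=true count from datamodel=Network_Traffic by All_Traffic.dest
    | where NOT cidrmatch("172.16.0.0/20", 'All_Traffic.dest')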

How do I search using a data model?

I've been working on a report that shows dropped or blocked traffic using the interesting ports lookup table. I want to change this to search the Network Traffic data model so I'm not using `*` for my index. Any help on this would be great. Thanks.
index=* (action="blocked" OR action="dropped") [| inputlookup interesting_ports_lookup | fields dest_port] | table dest_port, dest_ip, src, app
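For reference, a rough tstats equivalent against the CIM Network Traffic model might look like the sketch below. The rename inside the subsearch maps the lookup's dest_port onto the data model's field name so it can act as a WHERE filter. The field names are assumptions based on the CIM All_Traffic dataset, and the final macro assumes the CIM/ES `drop_dm_object_name` macro is available:

    | tstats summariesonly=true count from datamodel=Network_Traffic
        where (All_Traffic.action="blocked" OR All_Traffic.action="dropped")
        [| inputlookup interesting_ports_lookup | fields dest_port | rename dest_port as "All_Traffic.dest_port"]
        by All_Traffic.dest_port All_Traffic.dest_ip All_Traffic.src All_Traffic.app
    | `drop_dm_object_name("All_Traffic")`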

How to build a datamodel like this?

My data consists of pairs of files, let's call them file_A_1...file_A_n and file_B_1...file_B_n, where file_A_1 is connected with file_B_1. The pairs are always ingested together at the same time. The first step I need in my datamodel is to join the corresponding pairs, like source=file_A_1 join type=outer myIDField [ search source=file_B_1 ]. How can I achieve this dynamically, for every pair of files?
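One pattern that avoids a per-pair join entirely is to search both file families at once and merge rows on the shared ID with stats. A minimal sketch, assuming myIDField exists in both sources; field_from_A and field_from_B are placeholders for whatever fields each file type carries:

    source=file_A_* OR source=file_B_*
    | stats values(field_from_A) as field_from_A values(field_from_B) as field_from_B by myIDField

Since a data model object's constraint search can't contain pipes, the constraint would just match both source families (source=file_A_* OR source=file_B_*) and the pairing would happen at search or pivot time.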

Splunk Enterprise Security: Why am I getting "[indexer] The search for datamodel 'Threat_Intelligence' failed to parse, cannot get indexes to search"?

Hello, I have an error message in the Threat Activity dashboard on a Splunk Enterprise Security search head: [indexer] The search for datamodel 'Threat_Intelligence' failed to parse, cannot get indexes to search. I disabled acceleration on the Threat Intelligence data model and I still get the error. Any help, please?

How to do select * on a datamodel

I am trying to build a machine learning model using data from data models in Splunk. To build a feature vector, I need to run SELECT * (SQL-style) queries against Splunk data model data. I could not find any such query in the documentation. Currently I am tweaking a tstats query to do this, but I have to run an individual query for each attribute in the data model, which is very slow. Is there another way to do this?
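The closest thing to a SQL SELECT * is probably the datamodel command, which returns the model's events with all attributes in a single search (at the cost of tstats-level speed). A minimal sketch, using Network_Traffic and All_Traffic as stand-ins for your model and object names:

    | datamodel Network_Traffic All_Traffic search
    | fields All_Traffic.*

Against an accelerated model, an alternative is to name every attribute once in a single tstats call (values(field1), values(field2), ... by _time), which should still be much faster than one query per attribute.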

Pivot Column Fields Disappear in results table for long date range

Here is the actual query:
| pivot data_model_name object_name avg(response_time) AS "Average of response_time" SPLITROW _time AS _time PERIOD hour SPLITCOL somefield1 FILTER somefield2 is yes FILTER somefield3 isNot Unknown SORT 0 _time ROWSUMMARY 0 COLSUMMARY 0 NUMCOLS 0 SHOWOTHER 0
Running the query for each date range individually:
Day 1: the SPLITCOL somefield1 table has values a, b, c, d
Day 2: the SPLITCOL somefield1 table has values a, b, d
Day 3: the SPLITCOL somefield1 table has values a, c, d
Running the query for the last 3 days as one range, I get results only for columns a and d; columns b and c do not appear in the end result. Updating NUMCOLS to a non-zero value shows all column values a, b, c, and d in the final results table:
| pivot data_model_name object_name avg(response_time) AS "Average of response_time" SPLITROW _time AS _time PERIOD hour SPLITCOL somefield1 FILTER somefield2 is yes FILTER somefield3 isNot Unknown SORT 0 _time ROWSUMMARY 0 COLSUMMARY 0 **NUMCOLS 1000** SHOWOTHER 0
Is this a bug in pivot?

Splunk CIM Network Datamodel

So I am writing an iRule for an F5 load balancer pair to log LDAP usage. The log needs the following to meet the customer's needs: trustedServer IP and port, F5 self IP and port, destination IP and port. I added a couple of other fields just to match up with the networking models. **So I used this format:**
%TIME% app=ldap vendor_product=f5 f5_irule=Splunk-irule-ldap src_ip=1.2.3.4 src_port=12345 port=5432 dest_ip=4.3.2.1 dest_port=636 direction=outbound protocol=ip transport=tcp action=allowed
I still need to add the self information. I assumed there would be a CIM-recommended field for this, but I don't see one. Would it be the following? I think that would be wrong, since these are supposed to be gathered from an asset inventory system, right?
dvc_ip=12.34.56.67 dev_port=232323 dvc_mac=ABCD
I've noticed some apps on Splunkbase using macaddr="AA:AA:AA:AA:AA:AA" ipaddr="10.10.10.10"

Field values not appearing in data model

I'm trying to add uptime field values to the Performance data model. I used an EXTRACT to create the field and can validate that each expected host is putting a value into the field. I then created an eventtype/tag to tag this field/value as os, uptime, and performance, and can validate with `index=network uptime=* | table uptime, tag` that each expected host instance is populating the field and tagging it correctly. Yet when I look in the data model with
| datamodel Performance All_Performance search | fields All_Performance*
the uptime field does not exist, which in turn means the dashboard I'm trying to populate (a PCI dashboard) isn't working correctly.

Why am I getting Splunk IT Service Intelligence import error "datamodel model of KPI invalid, datamodel field must be a non-empty string"?

Does anyone know why I'm getting this ITSI import error? The import of services from a successful backup fails. Details: the main machine was Windows, remote; the JSON was created correctly with no unusual errors. On import, on a Mac, everything imports (glass tables, etc.) EXCEPT services, with the following error:
2016-04-13 18:39:49,722 ERROR [itsi.kvstore.operations] [kvstore_backup_restore] [_write_data_to_collection] datamodel model of KPI invalid, datamodel field must be a non-empty string
Traceback (most recent call last):
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 290, in _write_data_to_collection
    itoa_common.save_batch(write_instance, self.username, data, no_batch, self.dupname_tag)
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_common.py", line 448, in save_batch
    itoa_object.save_batch(owner, data_list, True, dupname_tag)
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/ITOA/itoa_object.py", line 217, in save_batch
    self.do_additional_setup(owner, valid_data_list, method=method)
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/itsi/objects/itsi_service.py", line 674, in do_additional_setup
    generated_kpi_searches = itsi_searches.gen_kpi_searches(gen_alert_search = True)
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/itsi/searches/itsi_searches.py", line 911, in gen_kpi_searches
    gather_filter_search = self.get_filtered_event_search()
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/itsi/searches/itsi_searches.py", line 586, in get_filtered_event_search
    self._get_filtered_event_search_parts(search_parts)
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/itsi/searches/itsi_searches.py", line 565, in _get_filtered_event_search_parts
    self._validate_datamodel_model(datamodel_model)
  File "/Applications/Splunk/etc/apps/SA-ITOA/lib/itsi/searches/itsi_searches.py", line 464, in _validate_datamodel_model
    raise ve
ValueError: datamodel model of KPI invalid, datamodel field must be a non-empty string

Is there a way to get output similar to the list function that works in a datamodel search?

I am searching for some data for a user, and the data file is huge, so the normal search-and-stats approach takes too long. I created a data model for this, but the problem is I can't use the `list(field)` function in a data model and have to use `values(field)` to get the data. However, `values` picks up unique values only, so for a user, if a certain value didn't change in subsequent records, that field isn't repeated, but I need the values for all records. It works fine if there is one record per user.
user1 02/15/2015 +000000000.00 +0000000.00 +00000.00 +0000000.00 1461740400
      04/15/2014 +000000001.00 +0000001.00
      10/15/2013 +000000002.00
user2 03/15/2014 +000000020.00 +0000000.00 +00001.00 +0000000.00 1461740400
user3 03/15/2014 +000000020.00 +0000000.00 +00001.00 +0000000.00 1461740400
user3 03/15/2014 +000000100.00 +0000001.00 +00002.00
user3 03/15/2014 +000000000.00 +00003.00
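One possible workaround is to keep the data model for fast filtering but run the full stats functions over the events it returns, since the datamodel search command supports list() like any normal search. A minimal sketch, with the model, object, and field names as placeholders:

    | datamodel MyModel MyObject search
    | stats list(MyObject.date) as date list(MyObject.amount) as amount by MyObject.user

Note that `| datamodel ... search` runs over the raw events matching the model's constraints, so it doesn't use the acceleration summaries; tstats is faster but only supports values-style aggregations.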

How to manage data models with REST endpoints?

Hello, I am trying to find a way to manage data models using REST endpoints: [http://docs.splunk.com/Documentation/Splunk/6.3.1/RESTREF][1]
**My main objectives are:**
- Launch datamodel rebuild operations from custom scheduled scripts (creating some admin scripts that would talk to splunkd and launch the acceleration rebuild)
- Monitor datamodel acceleration states and alert on a few conditions (for example: a datamodel's acceleration state differs from 100% accelerated, or a defined datamodel acceleration has been deactivated...)
*I have found a related post on Splunk Answers:* [https://answers.splunk.com/answers/326499/how-can-i-programmatically-monitor-data-model-acce.html][2]
Unfortunately, I couldn't find a REST endpoint that initiates a datamodel acceleration rebuild.
**Also, why does the REST endpoint** | rest /services/data/models **seem unable to retrieve datamodel information unless the model is globally shared?** (In our case, for example, very few datamodels are globally shared; most are shared only at the app level.)
Any help will be appreciated :-) Guilhem
[1]: http://docs.splunk.com/Documentation/Splunk/6.3.1/RESTREF
[2]: https://answers.splunk.com/answers/326499/how-can-i-programmatically-monitor-data-model-acce.html
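For the monitoring half, one approach that should work from a search (and therefore from a scheduled alert) is the summarization admin endpoint. A sketch along these lines; the summary.* field names are assumptions worth verifying against your own output:

    | rest /services/admin/summarization by_tstats=t splunk_server=local
    | table summary.id summary.complete summary.is_inprogress

summary.complete is a 0-to-1 fraction, so alerting when it stays below 1 for some period would cover the "differs from 100% accelerated" case.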

Transaction on (accelerated) datamodel

Hello, I have a search like this:
index=XXX (sourcetype=XXX OR sourcetype=XXX) THREAT cs_component_id=XXX
    [ search index=XXX (sourcetype=XXX OR sourcetype=XXX) THREAT cs_component_id=XXX pa_dst_ip=YYY pa_threat_id=*Encrypted*
    | fields cs_component_id, pa_session_id ]
.......
| transaction cs_component_id pa_session_id maxspan=5m maxpause=5m
The problem is that we have roughly 5 million events per hour here, and I want the search to run over a day, a week, etc. So I made a datamodel... How can I do the search above on an (accelerated) datamodel? Kind regards, Jens
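transaction isn't available through tstats, so a common substitute is to approximate it with a stats-style grouping over the accelerated summaries. A rough sketch, where MyModel and Root_Event are placeholders for your model and root object, and which assumes cs_component_id and pa_session_id are attributes of the model:

    | tstats summariesonly=true count min(_time) as earliest max(_time) as latest
        from datamodel=MyModel
        by Root_Event.cs_component_id Root_Event.pa_session_id
    | eval duration=latest-earliest

This yields per-session counts and durations much like transaction would, though without maxspan/maxpause semantics. If you genuinely need grouped raw events, the fallback is | datamodel MyModel Root_Event search | transaction ..., which scopes the search to the model's constraints but does not use the acceleration summaries.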

Splunk App for Enterprise Security: How to edit the Threat Intelligence Data Model to include a field within the sourcetype?

So within the Enterprise Security app, there is the built-in Threat Activity dashboard. One of the panels shows your sourcetype (firewall) and all the hits where events from that sourcetype match up with threat activity. What it doesn't show is whether the matching event was blocked or allowed. From the query being run, I was able to figure out that it uses the Threat Intelligence data model. Within my firewall sourcetype there is a field called status with the values passthrough or accept. When searching `index=threat_activity`, there is no status field available. I went into the Threat_Intelligence data model, searched for the action field using "add attribute", and saw some fields from my firewall sourcetype, but not status. I added status manually, waited for a refresh, and still did not see it show up. How can I properly edit this data model to include this field, so that I can then edit the query in the Threat Activity dashboard to filter blocked events from allowed events?

Is there a quick way to list all fields in a data model within Splunk?

I've read about the pivot and datamodel commands. What I'm trying to do is run some sort of search in Splunk (REST, perhaps) to pull out the fields defined in any loaded datamodel. I'm not trying to run a search against my data as seen through the eyes of any particular datamodel. In other words, I'd like output something like:
DataModel  Object  Fields
Web        Web     action, app, bytes, bytes_in, ...
I'm not as concerned about the exact formatting as much as the list of fields. You can run something like this, but the description field is a bear to go through:
| rest /servicesNS/-/-/datamodel/model | dedup title | table title description
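Since the description field of that endpoint holds the model's JSON definition, spath can pull the field names out of it. A sketch along these lines; the path names assume the standard datamodel JSON layout (objects, objectName, fields, fieldName), so verify against your own output:

    | rest /servicesNS/-/-/datamodel/model
    | spath input=description path=objects{} output=object
    | mvexpand object
    | spath input=object path=objectName output=object_name
    | spath input=object path=fields{}.fieldName output=field_names
    | table title object_name field_names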

Are wildcards with tstats on accelerated data models not possible?

I'm running a search that is something like this:
| tstats values from datamodel=foo
When the datamodel is not accelerated, I get all my data. When it is accelerated, no data is returned. If I specify the fields with `values(foo)`, `values(bar)` and so on, it works just fine. Does anyone know whether wildcards, or returning all values at once, aren't supposed to work when the datamodel is accelerated? Is there any way to get around this? Thanks!
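Until that's resolved, one workaround is to enumerate the model's attributes once in a single tstats call. A minimal sketch, where foo stands in for both the model and its root object (as in the question) and field1/field2 are placeholders for the model's actual attributes:

    | tstats summariesonly=true values(foo.field1) as field1 values(foo.field2) as field2
        from datamodel=foo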

Pivot Reports - Why can't I select third-level objects?

Hi all, I created a data model in Splunk which has three levels of objects. For example:
1. RDP Events
1.1 LSM Log Entries
1.1.1 Successful Session Logins
In the Pivot report, I choose "RDP Events". Why can't I choose "is_Successful_Session_Login" to add as a column in my report? Is there something wrong with my approach? Please note that if I select LSM Log Entries, I can select the object as a column for my report. Thanks in advance

Splunk Enterprise Security: How to modify the Top Infections Search to exclude results where signature=Tracking Cookies?

Can someone help me modify the Top Infections search? It uses tstats and a data model. I'm trying to exclude results where signature=Tracking Cookies, but the usual exclusion methods aren't working with tstats and a data model. The dashboard search is below:
| tstats `summariesonly` dc(Malware_Attacks.dest) as dest_count from datamodel=Malware where * by Malware_Attacks.signature | `drop_dm_object_name("Malware_Attacks")` | sort 10 - dest_count
This search results in a chart with the signature name in one column and dest_count in the next. I would like to exclude results where signature equals "Tracking Cookies".
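tstats does accept negated field filters in its WHERE clause, so one likely fix is to replace the `where *` with the exclusion directly, quoting the multi-word value:

    | tstats `summariesonly` dc(Malware_Attacks.dest) as dest_count from datamodel=Malware
        where Malware_Attacks.signature!="Tracking Cookies"
        by Malware_Attacks.signature
    | `drop_dm_object_name("Malware_Attacks")`
    | sort 10 - dest_count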

Where can I find detailed documentation for using tstats with accelerated data models?

I'm starting to use accelerated data models to power some dashboards, but I'm having some issues. For example, after a few days of searching, I only recently found out that to reference fields I need to use the objectName.fieldName format, and I'm still not clear on what the "nodename" attribute is for. My question to the Splunk sages: where are these and other data model specifics documented?
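For what it's worth, nodename lets a tstats search scope to a child dataset of the model while fields keep their full lineage from the root object. A hypothetical sketch; Blocked_Traffic is a made-up child dataset name used only to illustrate the syntax:

    | tstats count from datamodel=Network_Traffic where nodename=All_Traffic.Blocked_Traffic
        by All_Traffic.dest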

Splunk IT Service Intelligence: Why am I getting datamodel search error "Unable to find tag oshost and tag performance"?

| datamodel Host_OS CPU search | `aggregate_raw_into_service(avg, Performance.CPU.cpu_load_percent)` | `assess_severity(ac600b7a-5db7-49b9-a3b6-1535c31d7826, d307e18cac4d171a0539a07c, true, true)` | eval kpi="WebService KPI 18", urgency="5", alert_period="5"
I have installed Splunk IT Service Intelligence 2.1.0. When I am in the service editor creating a KPI for CPU, I choose the KPI source as datamodel: HostOperatingSystem - CPU - cpu_load_percent. But when I click on the generated search, I get a "yellow" warning with the following messages:
The specified search will not match any events
unable to find tag oshost
unable to find tag performance
Am I missing any steps in the installation? It seems tags are missing. How do I correct this? Any help is appreciated. Thank you, Ravichandran
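As a quick check, you could verify whether any events actually carry those tags (they normally come from eventtypes shipped with the relevant technology add-ons, such as the *nix or Windows TAs). A simple probe might be:

    tag=oshost tag=performance
    | stats count by sourcetype

If this returns nothing, the eventtypes/tags that feed the Host_OS data model aren't being applied, which usually points to a missing or unconfigured add-on rather than an ITSI installation step.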

Splunk App for Web Analytics: Why does my data model have empty fields when I automatically index data?

Hello there. I'm having another issue with the Splunk App for Web Analytics, but I'm not sure where the problem is. I created a script that downloads some data and puts it in a directory. Splunk then picks this data up in batch mode and indexes it into an index. On the other hand, I have configured the Web Analytics app and it works fine, but there seems to be some problem between the automatically indexed data and the data model, because the panels go crazy and don't show real data. It's as if they can't get data from the data model, or the data is corrupted. With this context, I have done some tests:
- All is fine in the script logs.
- All is fine in splunkd.log.
- When I search the data model, I can't get any results (some fields seem to be empty).
- I restarted the Splunk deployment, without any difference.
- When I rebuild the data model, the results are even worse.
- When I use Pivot to view the data model, some fields that had data before are now empty (**http_session**, **http_locale**, **http_session_channel**, **http_session_duration**, **http_session_end**, **http_session_pageviews**, **http_session_referrer**, **http_session_referrer_domain**, **http_session_referrer_hostname**, **http_session_start**).
- If I delete the index with the data, create a new one, and index the same data into it, everything works fine again after the configuration steps.
Does anyone have a clue? Regards.