Channel: Questions in topic: "datamodel"
Viewing all 226 articles

How to write a data model query with a lookup field value?

index=websense
| lookup Websense_Disposition_Lookup Disposition_ID AS disposition OUTPUTNEW Action AS Action
| search Action=Permitted
| eval bytes_in_GB=round(bytes_in/1073741824,2)
| stats sum(bytes_in_GB) AS download_size_GB by category
| sort - download_size_GB
| head 10

Above is my normal query, but I am unable to write the same search against a data model. I am facing two issues: 1. the Action field is populated via a lookup, and 2. the conversion of the data volume to GB. Please help me write the data model query. Data model: Web.
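A hedged sketch of one possible tstats equivalent of the search above. It assumes the CIM Web data model (whose fields carry the Web. prefix) and that the disposition lookup is configured as an automatic lookup on the source, so Web.action is populated at search time and picked up by acceleration; the exact field names are assumptions to adapt:

```
| tstats sum(Web.bytes_in) AS bytes_in FROM datamodel=Web WHERE Web.action="Permitted" BY Web.category
| eval download_size_GB=round(bytes_in/1073741824,2)
| sort - download_size_GB
| head 10
```

The division by 1073741824 (1024^3) converts bytes to GB after aggregation, which keeps the heavy lifting inside tstats.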

How to deal with curly brackets in field names creating a data model

Hi, I was working with JSON data. (Example here: http://www.splunk.com/web_assets/hunk/Hunkdata.json.gz) The data is stored in Hadoop HDFS (download e.g. the Hortonworks HDP Sandbox and a trial version of Splunk Analytics for Hadoop). Example event:

{"customer": {"city": "SACRAMENTO", "zip": "95819", "firstName": "ERWIN", "accountNumber": "900401544", "lastName": "HARRELL", "address": "831 Maverton Dr.", "phone": "5215464018", "state": "CA", "sex": "M", "age": "55"}, "timestamp": "2013-09-01T00:01:05", "servername": "dash.5.woc.com", "charactertype": "Curd Cobbler", "items": [{"category": "armor", "itemid": "DB-SG-G01", "price": 25.0, "description": "'Vegan Friendly Gloves'"}, {"category": "tools", "itemid": "AB-TR-N89", "price": 135.0, "description": "'Robotic Cow Milker'"}, {"category": "tools", "itemid": "AB-TR-N89", "price": 135.0, "description": "'Robotic Cow Milker'"}, {"category": "cheese", "itemid": "ST-RF-M04", "price": 20.0, "description": "Manchego"}, {"category": "tools", "itemid": "CU-PG-G06", "price": 65.0, "description": "'Cheese Board of Glory'"}], "total": 380.0, "type": "purchase", "region": "Limburgerland"}

INDEXED_EXTRACTIONS does not work (because everything is search time when you deal with Hadoop), but you can use KV_MODE=JSON in your sourcetype definition. After auto extraction, the sample data includes array fields like items{}.category. If you then create a data model and choose to add "items{}.category" as an auto-extracted field, you get an error message.

There is a "new" option called JSON_TRIM_BRACES_IN_ARRAY_NAMES (https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf#Structured_Data_Header_Extraction_and_configuration). Unfortunately this option is index time only, so again it does not work with data stored in HDFS. (But you can try to ingest the example data with Splunk Enterprise, and it should work.) In addition, this feature has some issues with spath compatibility: "Note that enabling this will make json indextime extracted array fiels names inconsistant with spath search processor's naming convention."

Long story short: use FIELDALIAS to rename the fields with curly brackets. This is a search-time option and will present the "working" field name in addition to the "non-working" version when you click "add field: Auto-Extracted". Example:

[json:hunkorders]
FIELDALIAS-items=items{}.category AS items.category,items{}.description AS items.description,items{}.itemid AS items.itemid,items{}.price AS items.price

(Yes, you can define multiple rename statements in one line.) You don't have to do it on the command line: select Settings > Source Types and you are good to go. Feel free to comment or answer this article if you have other or better ideas. Greetings, Holger
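To confirm the aliases are active, a quick check search (the sourcetype name json:hunkorders is taken from the example above; adapt it to yours):

```
sourcetype="json:hunkorders" | stats count by items.category
```

If the FIELDALIAS is working, items.category is populated alongside the original items{}.category, and only the brace-free name needs to go into the data model.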

Why is eStreamer data from Sourcefire not getting tagged for the IDS_Attacks data model?

Hi, We are indexing eStreamer logs from Sourcefire and have the app "eStreamer for Splunk" (2.2.1) and the add-on "Splunk Add-on for Cisco FireSIGHT" (3.3.2) installed on both the indexer and the search heads. But when searching for sourcetype=eStreamer, I do not see any tags added to the events, and no eventtypes either. As per the add-on documentation, IDS events should be tagged as "ids" and "attack". Also, the following query against the data model does not return any results:

| datamodel Intrusion_Detection IDS_Attacks search
| search sourcetype="eStreamer"

I do see that a few fields are correct per CIM, e.g. ids_type, but not all; e.g. data still returns with dest_ip instead of dest. This is despite the following fieldalias object being present. ![alt text][1] [1]: /storage/temp/183178-dest-ip-estreamer.png Are we missing any additional setting/configuration, or am I searching the data model in the wrong way? Any help would be appreciated. Many Thanks, ~ Abhi
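As a first check (a generic troubleshooting sketch, not specific to this add-on): the CIM tags are attached via eventtypes, so if no eventtype is applied to the raw events at all, the add-on's knowledge objects are probably not visible in the current app context or not matching the events. This shows what is actually being applied:

```
sourcetype=eStreamer
| stats values(eventtype) AS eventtypes values(tag) AS tags
```

Empty eventtypes here would point at permissions/export of the add-on's eventtypes rather than at the data model itself.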

Why are data model metrics not showing up with this search?

The following searches work:

| tstats `xxxx_summaries_only` avg(All_Performance.Memory.swap_free) AS swap_free FROM datamodel=COY_Performance WHERE nodename="All_Performance.Memory" AND All_Performance.dest="hostname-11"

| tstats `xxxx_summaries_only` avg(All_Performance.Memory.swap) AS swap FROM datamodel=COY_Performance WHERE nodename="All_Performance.Memory" AND All_Performance.dest="hostname-11"

This doesn't work:

| tstats `xxxx_summaries_only` avg(All_Performance.Memory.swap_used) AS swap_used FROM datamodel=COY_Performance WHERE nodename="All_Performance.Memory" AND All_Performance.dest="hostname-11"

But via the pivot on the data model, I do see metrics from All_Performance.Memory.swap_used. Any reason why this search returns nothing?
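One thing worth ruling out (an assumption to verify, not a confirmed cause): if swap_used was added to the data model after acceleration was built, the existing summaries may not contain it yet, while pivot (which can fall back to raw events) still shows it. Comparing the field's presence with and without summaries can confirm this:

```
| tstats summariesonly=true count(All_Performance.Memory.swap_used) AS with_summaries FROM datamodel=COY_Performance WHERE nodename="All_Performance.Memory"
| appendcols [| tstats summariesonly=false count(All_Performance.Memory.swap_used) AS without_summaries FROM datamodel=COY_Performance WHERE nodename="All_Performance.Memory"]
```

If with_summaries is 0 but without_summaries is not, rebuilding the acceleration should bring the field into the summaries.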

Why can't the Enterprise Security app see data from a specific index despite having correct tags?

Hi, When I search all indexed data against the "Intrusion Detection" data model from the Search & Reporting app's context, Splunk correctly identifies data from both Imperva and eStreamer, based on the tags ids and attack. ![alt text][1] But when I run the exact same search from the context of Enterprise Security, only data from Imperva is returned; it does not see the eStreamer data. ![alt text][2] I have verified that under CIM Setup for the "Intrusion Detection" data model there are no restrictions on which indexes it can search. Also, the knowledge objects which normalize the eStreamer data have global permissions. What else could we be missing? Many Thanks, ~ Abhi [1]: /storage/temp/183202-datamodel-ids-searchreporting.jpg [2]: /storage/temp/183203-datamodel-ids-es.jpg
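One differentiator worth checking (an assumption to verify, not a confirmed diagnosis): many ES searches query accelerated summaries only, so data that is tagged correctly but not (yet) summarized shows up in an ad-hoc datamodel search while being absent from ES. Running each of these from the ES context and comparing the sourcetypes returned can narrow it down:

```
| tstats summariesonly=false count FROM datamodel=Intrusion_Detection BY sourcetype

| tstats summariesonly=true count FROM datamodel=Intrusion_Detection BY sourcetype
```

If eStreamer appears only in the summariesonly=false result, the gap is in acceleration coverage rather than in tagging or index restrictions.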

Converting JSON results to DataModel structure

Hello, Splunkers! I have a REST query result set and would like to "convert" it to a dataset structure, to automatically create a data model that perfectly fits my homemade application logs. Does anyone know how I can do this? Thank you!

How to search Data Models with Javascript in a Search Manager or through a Data Model Object?

I've created a data model and want to search it from my external JavaScript. For my first attempt, a SearchManager would not start the search using the data model query:

var datamodelSearch = new SearchManager({
    id: "datamodelSearch",
    search: '| datamodel test_commits commits search | where Commit = $commithash$ | head 5',
    earliest_time: '-30d',
    latest_time: 'now',
    preview: false,
    cache: true
}, { tokens: true });

datamodelSearch.on('search:start', function() {
    console.log('DM STARTED!!!'); // would never get here
});

On a second attempt, I tried to use the DataModelObject class, following this documentation: http://dev.splunk.com/view/javascript-sdk/SP-CAAAEY8#workwithobjects

var service = mvc.createService({ owner: "nobody" });
service.dataModels().fetch(function(err, dataModels) {
    var object = dataModels.item("test_commits").objectByName("commits");
    object.startSearch({}, "| head 5", function(err, job) {
        console.log("The job has name:", job.name);
        job.results({count: 5}, function(err, results, job) {
            console.log("Fields: ", results.results); // results would be null
        });
    });
});

This second search created a search job with a search id, but I was not able to pull the results from the job. However, if I looked up the search id in the job inspector, I would correctly see 5 results. Could anyone help me out?

Datamodel search with Datamodel Subsearch Circular Dependency Error

How do I fix this search to avoid "Error in 'SearchParser': Found circular dependency when expanding datamodel=Intrusion_Detection.Network_IDS_Attacks"?

| datamodel Intrusion_Detection Network_IDS_Attacks search
| search index=alienvault earliest=-0d@d latest=now
| eval ReportKey="today"
| append [
    | datamodel Intrusion_Detection Network_IDS_Attacks search
    | search index=alienvault earliest=-1d@d latest=-0d@d
    | eval ReportKey="yesterday"
    | eval _time=_time+86400]
| timechart count by ReportKey
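One way to sidestep the error entirely is to avoid the subsearch: run the datamodel search once over both days and derive ReportKey from _time. A sketch that keeps the same day-shift trick so today and yesterday overlay on the same axis:

```
| datamodel Intrusion_Detection Network_IDS_Attacks search
| search index=alienvault earliest=-1d@d latest=now
| eval ReportKey=if(_time < relative_time(now(), "@d"), "yesterday", "today")
| eval _time=if(ReportKey="yesterday", _time+86400, _time)
| timechart count by ReportKey
```

relative_time(now(), "@d") is midnight today, so everything before it is labeled "yesterday" and shifted forward by 86400 seconds, exactly as the append branch did.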

Error when hitting the REST endpoint for datamodel.

I need the default configuration for data models which are globally defined. For that I am using the following URL:

https://:8089/servicesNS/nobody/test/datamodel/model/default

This throws the following error: Could not find object id=default. Can anyone help me? Any small help would be appreciated.
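The datamodel/model endpoint addresses individual models by name, so "default" is looked up as a model id rather than as a configuration layer, which would explain the "Could not find object id=default" error. To list the models in that namespace and keep only the globally shared ones, the rest search command can be used from Splunk itself (a sketch; the app name test is taken from the URL above):

```
| rest /servicesNS/nobody/test/datamodel/model
| search eai:acl.sharing=global
| table title eai:acl.app eai:acl.sharing
```

Each model's definition can then be fetched individually at .../datamodel/model/<modelname>.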

How to create data model for top events seen in Splunk?

We are collecting logs from various sources. The volume of logs is huge, nearly 20 million per day. Each log source has a different field name for the event name, such as EventName, Signatures, Event, name, or CloudEvent. With this kind of data, I need to create a data model so that when I run a tstats search I get results quicker. Please can someone suggest what steps I need to follow to create a data model for top events? Do I need to have both a root event and a root search? If anyone can share the parameters that I need to enter in the root event constraints and root search, that would be great. Thanks,
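A sketch of one approach, where every name below (the indexes, the normalized_event field, the model name TopEvents) is a placeholder to adapt: a root event dataset only needs a constraint search, the differing field names can be unified with a calculated (eval) field on that dataset, and an accelerated tstats search can then group by the unified field. A root search dataset is not required for this.

```
Root event constraint (covering the relevant sources):
    index=source_a OR index=source_b OR index=source_c

Calculated field on the root event dataset:
    normalized_event = coalesce(EventName, Signatures, Event, name, CloudEvent)

Query against the accelerated model:
    | tstats count FROM datamodel=TopEvents WHERE nodename=RootEvent BY RootEvent.normalized_event
    | sort - count
    | head 20
```

coalesce() picks the first of those fields that is present on each event, so one field name serves all sources in the tstats BY clause.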

Splunk Stream "stream:dns" sourcetype and the CIM "DNS.answer" field

I am working with the Splunk Stream app to maintain a record of DNS queries, and I want to check the returned IP address answer for each query, where present. Searching the DNS stream events via the datamodel command for the Network Resolution "DNS" data model object, it appears that the answer is given by the "host_addr" field in the stream events. It's not clear to me from the field definitions for the stream:dns sourcetype that this should be the case; "host_addr" is just described as Host IP Address. For anyone with experience of the DNS stream events and data model: is there a configuration file where this is specified as an alias somewhere, to set DNS.answer=host_addr?

Also, the "message_type" field for the DNS stream events is overwhelmingly multivalued, with the values QUERY and RESPONSE. I assume this is just how stream events are logged for DNS, but I find it slightly confusing after using Windows DNS debug logs as a point of reference, which are either one or the other. Any insight appreciated, thanks.

Splunk Common Information Model (CIM): Why is data model acceleration not working for Email data model?

We are running the latest versions of Splunk Enterprise, Splunk Enterprise Security, and the Splunk Common Information Model (CIM) [SA_CIM]. I can enable acceleration for the Email data model, but it never goes past 0% built and always says "Building". I am not having issues with any other data model. If I search for `tag=email`, as the data model constrains to, I get plenty of events (Cisco IronPort source). If I search the data model with `| datamodel Email search`, it returns events. Yet acceleration (which drives the email dashboards) does not work. If I clone the Email model to Email_temp and accelerate the new one, it works fine. What could be the issue here? Thanks Craig
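When acceleration sits at 0% "Building" while a clone builds fine, the scheduled summary-building jobs for the original are worth inspecting. A generic starting point (a sketch; the *ACCELERATE* naming pattern for summary jobs is an assumption to verify against your scheduler logs):

```
index=_internal sourcetype=scheduler savedsearch_name="*ACCELERATE*Email*"
| stats count by savedsearch_name status
```

Jobs that are continuously skipped or deferred here, rather than failing, would point at scheduler contention for the original model's summary search.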

How To Use tstats with nested data models - getting empty results

I have a data model named "AccessLogs" with a dataset hierarchy that looks like this:

RootSearchDS          // sourcetype=http_access_log
  BusinessHoursDS     // child of RootSearchDS, filtering to only include Mon-Fri work hours
    BetaDS            // child of BusinessHoursDS, host=BetaServer*
    ProdDS            // child of BusinessHoursDS, host=ProdServer*

I've made the data model publicly available and enabled acceleration for 1 day. Now I'm trying to use tstats to return some results about my datasets. I'm running queries over around 1-2 terabytes of data collected over 3 months. The normal pivots are very slow (a few hours to run), so I was hoping tstats would provide a faster alternative. I just need basic stats on my datasets, like average values segmented by week of the year. Here's my tstats command:

| tstats count avg(ResponseTimeMillis) as "AvgResponse" FROM datamodel=AccessLogs.RootSearchDS WHERE nodename=RootSearchDS.BusinessHoursDS.BetaDS by TimeWeekOfYear

I can see the count field is populated with data, but the AvgResponse field is always blank, as if the field doesn't exist. Yet when I pivot off my data model (AccessLogs > RootSearch > BusHours > Beta), I can see that the ResponseTimeMillis field does exist. Anyone know why avg(ResponseTimeMillis) might be blank? Did I specify my "datamodel" and "nodename" parameters correctly? Does ResponseTimeMillis need to be set as a "required" field in my dataset?
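A likely culprit (an assumption worth testing, not a confirmed fix): in tstats, data model fields generally have to be referenced by their dataset lineage, the same way nodename is. If ResponseTimeMillis is defined on RootSearchDS, the aggregate would be written as:

```
| tstats count avg(RootSearchDS.ResponseTimeMillis) AS AvgResponse
    FROM datamodel=AccessLogs.RootSearchDS
    WHERE nodename=RootSearchDS.BusinessHoursDS.BetaDS
    by TimeWeekOfYear
```

If the field is defined further down the hierarchy, the prefix follows it (e.g. RootSearchDS.BusinessHoursDS.ResponseTimeMillis). The bare name works in pivot, which resolves fields per dataset, but not in tstats.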

How to check if there is no data, or the extraction is improper, in the indexes used by data models

I want to create an alert for the data that is being used by data models: if an index has no data, or there is some missing extraction, trigger an alert. My idea is to search the data in the data models every 24 hours to identify whether I can see data from those indexes within each data model. But for this I have to create a separate alert per data model:

| tstats dc(count) from datamodel=Application_State by index
| streamstats count as row by index
| fields row index
| stats sum(row) as total
| search total=3

As Application_State has 3 indexes, if the total is not 3 then trigger the alert. Any suggestions on how this can be achieved?
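A simpler sketch of the same idea (the threshold of 3 indexes is taken from the question):

```
| tstats count FROM datamodel=Application_State BY index
| stats dc(index) AS indexes_reporting
| where indexes_reporting < 3
```

Scheduled every 24 hours with an alert condition of "number of results > 0", this fires whenever fewer than the expected 3 indexes have data in the model. One such scheduled search is still needed per data model, with the model name and threshold adjusted.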

How to edit my data model search to reference a lookup table?

Hi All, I am working on developing a search in Splunk Enterprise Security that will reference a lookup table named "Blacklist.csv", which contains a list of blacklisted IPs under a field called "IP_Blacklist". So far I have written a search that references more than one data model. The issue is I'm not getting any matches against the blacklisted IP list, although there is at least one match that should come up. My current search:

| multisearch
    [| datamodel "Network_Traffic" "All_Traffic" search]
    [| datamodel "Authentication" "Authentication" search]
    [| datamodel "Web" "Web" search]
| lookup Blacklist.csv IP_Blacklist
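One thing that stands out: as written, the lookup command matches the lookup's IP_Blacklist column against an event field that is also called IP_Blacklist, which the data model events are unlikely to carry. The event field holding the IP has to be mapped explicitly. A sketch, assuming the src field is the one of interest in each model (both the rename and the field choice are assumptions to adapt per model):

```
| multisearch
    [| datamodel "Network_Traffic" "All_Traffic" search | rename All_Traffic.src AS ip]
    [| datamodel "Authentication" "Authentication" search | rename Authentication.src AS ip]
    [| datamodel "Web" "Web" search | rename Web.src AS ip]
| lookup Blacklist.csv IP_Blacklist AS ip OUTPUT IP_Blacklist AS blacklist_match
| where isnotnull(blacklist_match)
```

The AS ip clause tells lookup which event field to compare against the IP_Blacklist column; the where clause then keeps only events that actually matched.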

datamodel query with time specifier for DB_Output

I have a search query with the datamodel command, and I want to use the results of this query in DB_Output. The query should run over a specific time range. The problem is, after configuring the DB_Output with the datamodel query and a cron schedule, it always runs the query over 'All Time' instead of the defined time range (earliest=-1d@d latest=-0d@d). Is there any way I can define time specifiers with datamodel? (tstats or stats cannot be used, as the query is not producing a statistical result.) Query example (the query uses only the listed commands):

| datamodel ... ... search | rename | eval | fillnull

Thanking you in advance. Let me know if any other details are required. (DB Connect 2.4)
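Since earliest and latest are also valid as search-command terms, one workaround sketch is to pin the time range inside the query itself, so it applies regardless of the range the scheduler passes ("Web Web" is a placeholder for the actual model and dataset):

```
| datamodel Web Web search
| search earliest=-1d@d latest=-0d@d
```

The inner search command filters on _time, so even a job dispatched over All Time only returns yesterday's events, and the remaining rename/eval/fillnull commands can follow unchanged.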

Risk Analysis datamodel empty / dashboard blank

So, I may be misunderstanding how this works, but from reading the blogs and documentation about Risk Analysis, there are many ways of getting risk data into Splunk, and one that should work out of the box is enabling a correlation search and giving it a risk score and risk object type. We've done that and have had several events trigger, but the data model (and index=risk) remain empty. I also created an ad-hoc risk entry, but there is still no data. This all leads me to believe I've missed something crucial. Anyone have any ideas? This is the documentation I'm referring to: https://www.splunk.com/blog/2014/08/12/risk-analysis-with-enterprise-security-3-1/ http://docs.splunk.com/Documentation/ES/4.7.1/User/RiskScoring http://docs.splunk.com/Documentation/ES/4.7.1/User/RiskAnalysis

Search a Splunk Enterprise Security data model - problem with wildcards

I'm trying to limit my search down to just certain accounts from the Authentication data model, but wildcards don't seem to limit the results as I'd normally expect when searching a specific index instead of the DM. I've tried a few options which I'd have hoped would work, but each just returns ALL account names:

| datamodel Authentication Authentication search | search Account_Name="abc*"
| datamodel Authentication Authentication search | search Account_Name="*abc*"
| datamodel Authentication Authentication search | search Account_Name=abc*
| datamodel Authentication Authentication search | where like(Account_Name,"abc%")

Is there a particular way you should use a wildcard within a DM search? Thanks.
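One possibility to rule out (an assumption to check against the actual model, not a confirmed cause): after `| datamodel ... search`, the dataset's fields come back prefixed with the dataset name, so the CIM account field would be Authentication.user rather than a bare Account_Name. Two sketches, one filtering the datamodel output and one pushing the wildcard into tstats:

```
| datamodel Authentication Authentication search
| search Authentication.user="abc*"

| tstats count FROM datamodel=Authentication WHERE Authentication.user="abc*" BY Authentication.user
```

Running `| datamodel Authentication Authentication search | head 1` and inspecting the field list will show the exact prefixed names available for filtering.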

Datamodel Rebuild status/detail information about the model's acceleration is unavailable

Is anyone facing this issue? I did a rebuild of a data model, and I used to see the detailed rebuild status (like the image below), but it is missing now. Any idea? ![alt text][1] [1]: /storage/temp/206617-acc.png

Renaming auto extracted fields

After parsing my JSON fields, the auto-extracted fields have formats like a{}.b and a{}.b{}.c and so on. When I try to add an auto-extracted field to a data model, I get an exception: "Field Name can not contain whitespace, double quotes, single quotes, curly braces or asterisks." This exception makes sense, as my auto-extracted field names contain curly braces, so how can I remove the curly braces? I tried to use the field alias concept mentioned in https://answers.splunk.com/answers/307993/is-there-a-bug-in-splunk-6-with-adding-an-attribut.html, but I'm not able to add a field alias in the data model. Is there an example of how to add a field alias in a data model?
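Field aliases aren't defined inside the data model itself; they are defined on the sourcetype (under Settings > Source Types, or in props.conf), and the brace-free names then appear in the data model's "Auto-Extracted" field list. A props.conf sketch, where your_sourcetype is a placeholder and the field names come from the question:

```
[your_sourcetype]
FIELDALIAS-json_arrays = a{}.b AS a.b a{}.b{}.c AS a.b.c
```

Since FIELDALIAS is a search-time setting, no re-indexing is needed; the aliased fields become available on the next search.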

