Boston, here we come! For those who've not yet had the opportunity to travel to a Splunk .conf event, it really is something to behold. Big corporations with stock market listings are often necessarily conservative and, perhaps, a bit dull. Splunk has none of that going on; if you're a cyber security professional, this is an annual occasion to really look forward to.

Boston vs. Vegas

Two CND consultants travelled out this year to .conf, which had moved from Las Vegas to Boston, Massachusetts. This was very welcome, and not just for the shorter flight and cooler time of year: Vegas can be a lot, especially at 45°C in the height of summer. Dare I say the vast Boston convention centre also gave it a more professional feel; there's something about walking past inebriated individuals flushing their cash away at a slot machine that doesn't sit well on the walk into a tech conference. I covered .conf23 in Vegas here (URL).

Day 1: Keynote

Big on our list of goals this year was to get to grips with the very significant changes in Splunk Enterprise Security 8.x, and of course I can't get past the introduction without a nod to Artificial Intelligence (is it AI or just ML?). Splunk is now a Cisco company, and so the keynote speeches (URL) on the opening day by the general manager, Kamal Hathi, talked about the integrations and alignments with Cisco products. A good example of this product alignment was the announcement of free Splunk licence metering for Cisco firewalls, which could be quite attractive for enterprises running them (personally I never enjoyed administering Cisco firewalls; they are not very admin friendly compared to competitors such as Palo Alto or Juniper).

There was also a significant focus on the concept of a Splunk Cloud data lake and the necessity of this at hyperscale ('ludicrous' scale, as Kamal put it) to train AI models on threat detections centrally across customers. It sounds very sensible, but I was left unconvinced that many customers would sign up for this despite promises of anonymised data. What did impress upon me through the speech (URL) was the scale-up in data bandwidth, data processing, CPU requirements and storage that the AI changes will bring about. It presented as a real paradigm shift in the IT industry that the cyber industry needs to help protect, and with cyber professionals hard to train and recruit, turning to AI tooling to absorb this volume tsunami has to be part of the solution.

Day 2: Platform announcements and Tech sessions

Splunk Enterprise Security 8 was released last year, and I think it fair to remark that it wasn't the smoothest landing. For a start, there have been a lot of changes to terminology, to the point that a table of past and present terms is now required; even as an accredited ES admin I have found it tricky to remember the nomenclature changes, for example a 'Correlation Search' is now an 'Event-Based Detection'. Another challenge was the underlying KV store and MongoDB upgrade from 4.x to 7.x; this is not strictly ES, but it has been lumped in with the ES changes.

ES 8.1 was released to coincide with .conf25, and the headlines all sounded very exciting: a 66% faster UI through improved efficiencies, runbooks and response plans integrated into the SOC workflow, and version-controlled detection tunings, all built into an enhanced UI through Mission Control. I was very keen to get ES 8.1 installed and tested; many of the technical sessions my colleague and I would attend related to ES.

SEC1123: Enterprise Security 8.1: Enhanced Detection and Investigation for the SOC

This was a fully subscribed interactive session that included hands-on labs working directly on the new ES version. It was good but somewhat rushed, and as the first session out of the gate after the keynotes it felt a little distracted. Regardless, it was good to get hands straight onto the new product. (URL)

SEC1638: From Request to Response: Mastering Security Data Onboarding

This session by Splunk Security Architect Duncan Goff was straightforward but landed really well. The key point is that customers often just try to wedge in all their data without necessarily considering the use case and lifecycle for it. Granted, if it is DNS or firewall logs then it almost always makes sense to onboard, but other log types should be considered deliberately and have an actions-on runbook associated with them. Duncan walked us through the OMM framework, which definitely left an impression on us for use during consultancy. (URL inc. video)

SEC1337 – Splunk Enterprise Security 8: AI Era Defence for Modern SOC's

This was presented by Marquis Montgomery, a bit of a legend within Splunk, having been a principal contact for ES for many years; he also used to deliver the platform training. I was really looking forward to this session detailing the ES 8.1 changes. I would summarise my takeaways as:

  • AI is highly likely to be leveraged in malicious attacks
  • SOCs are already overwhelmed by an unmanageable volume of alerts
  • ES 8 is intended to be a unified TDIR platform
  • Better signal-to-noise ratio / richer context / faster decisions
  • Updated use case library
  • Detection versioning, allowing non-destructive editing of the default detections
  • A detection diff view, with side-by-side code comparison
  • Full-featured editor mode
  • Enrichment with Threat Intelligence and SOAR automation
  • TruSTAR built into ES 8 for intelligence feeds
  • Playbooks for SOAR now built into ES 8.1 (premium); this is great as it will give real working exposure to SOAR playbooks and practices
  • Response plans, which help standardise workflow and organisational SOPs; documentation is no longer held elsewhere, and tasks are assignable, allowing collaboration

Session slides (URL), Session video (URL).

SEC1668 – Blazing-Fast Security Ops: Unleashing Splunk ES 8.0 for Speed and Scale

Another session covering ES enhancements, with real detail on some methods to improve performance. Slides (URL), Video (URL). My summary:

  • Focus on performance and scale
  • How Splunk can help you
  • How you can help yourself

  • Performance ingredients
  • ES: front end (JS), search (SPL), backend (Python), KV store
  • Platform: web server, search infrastructure, layout and APIs

  • 4 pillars
  • Goal: fast detection / fast time to acknowledge / scale with growth in data
  • Pillar: Experience (how analysts interact with the page, speed of load, the analyst queue)
  • Pillar: Efficiency (parallel load, millisecond load times, SID caching, etc.)
  • Lean KV stores; this will also reduce replication overhead
  • Pillar: Velocity (API optimisations; custom commands run only on the SH, whereas a macro is distributed to the peer tier, which was 15x faster)
  • Cost of enrichment: a macro was invoking enrichment, and a plain index= filter was 50% faster
  • The outcome was to remove 5 lookups and 10 joins, ending up 30% faster than before
  • Enrichments can be expensive to achieve
  • Ask what part of the search we can do on the indexers instead of just the search head

  • Up to 67% faster overall in ES 8 with all the combined improvements
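As a hypothetical sketch of that last point (the index, field names and the `asset_owners` lookup are invented for illustration), the idea is to replace a search-head-side `join` with filtering that streams to the indexer tier, enriching afterwards with a lookup:

```
index=proxy src_ip=10.0.0.0/8
| lookup asset_owners src_ip OUTPUT owner
| stats count by owner
```

The `index=` filter and `stats` pre-aggregation run distributed on the peers, so only a small result set ever reaches the search head, whereas a `join` subsearch forces that work onto the SH.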

DEV1136 – Everything you didn't know about metrics in Splunk Platform

I think metrics are taught very minimally in the Splunk professional education courses, or certainly were when I went through them in 2020. I've never had a customer ask me about them either, but I appreciate that for some events a metric index could be a far better fit and more efficient licence-wise. This was an interesting session that focused in part on hacks to wedge more data into a metric index while still capping out at 150 bytes. Interesting side knowledge, even if not immediately usable. Slides (URL), Video (URL). My notes:


# Metrics
  • Floating-point number storage
  • Maximum of 150 bytes per event
  • Multimetric format can lead to savings on licence ingest

## notes
  • Metrics only store to the second by default, not sub-second
  • Licensing: metrics are metered differently, more complex and can be more expensive
  • Metrics do use less disk space, up to 4x less
  • Metrics have no _raw journal file, unless you have an indexer cluster
  • Can be a means to improve stored searches

  • strings.data and merged_lexicon.lex perform differently in metrics
  • Can use HEC, which allows setting the index, host, source and sourcetype
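As a sketch of the HEC route (the host, index, sourcetype and values are invented for illustration), a single multimetric event posted to the `/services/collector` endpoint carries several `metric_name:*` measurements plus dimensions in one payload, which is where the licence savings come from:

```json
{
  "time": 1700000000,
  "event": "metric",
  "host": "web01",
  "source": "telemetry",
  "index": "foo_metrics",
  "fields": {
    "metric_name:cpu.util": 42.1,
    "metric_name:mem.used_pct": 63.8,
    "region": "us-east-1"
  }
}
```

Everything in `fields` that is not prefixed `metric_name:` (here `region`) is stored as a dimension you can split by.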

## mstats command
  • New command, very efficient
| mstats chart=t avg(duration) where index=foo by host span=600s

Happy Hour in the Pavilion

Having sat through five lectures, it was good to cut away to some of the novel and fun exhibits at .conf, including a guitar stand simulating a manufacturer monitoring production with a Splunk Edge Hub. It was all good fun and genuinely relaxed, and great to meet other Splunkers we'd worked with over the years. There was even a novel beer dispenser with a Splunk dashboard; I may have partaken.

Day 3: Tech Sessions and Search Party

The day started early on another very pleasant day in Boston; I'd been out early and run to Bunker Hill and back along the Freedom Trail, covering some 9 km. We took the bus down to the conference centre, which was only about a mile and a half from the hotel district where many Splunkers were staying. The sessions started at 0830 and you could feel the energy of people cutting around trying to get to their sessions in good time. I started with SEC1494.

SEC1494 – Splunking around the Phishmas Tree

An easy start to the day with an engaging presenter, who talked through an investigative process using domain data and a custom search to pivot on threat actor data, locating domain generation algorithms that matched a given TA. I really enjoyed it, and it got us thinking about search efficiency and alternative methods. I did point out that using a custom command prevents streaming searches to the indexer tier, which can be inefficient at scale. Slides (URL), Video (URL).

DEV1408 – Transform Splunk Enterprise Management with Integrated CI/CD Pipelines

Using Git and CI/CD practices when managing Splunk conf files, TAs and apps is a key skill and a usual part of any given Splunk PS day for me. I would also add that all my more successful clients are using Git, whilst those who are not tend to struggle to locate and maintain their golden source of code and are prone to more config errors.

I was keen to see how others maximise their efficiency and usage of Git. In some ways this session was more introductory and covered how one US company and their internal Splunk team use it. It was good, but the presenter was not following best practice, hosting all their saved searches in the default 'search' app; they should have created their own discrete organisation app and probably split it into several discrete elements. Regardless, I enjoyed the session. Slides (URL), Video (URL)

SEC1474 – Splunk and MITRE ATT&CK: Everything Covered? How We Know.

This covered some interesting points around coverage and exposure. Again, it walked security engineers back to really looking at what data they have and what their threats are, then identifying which detections they need in a gap-analysis type method. My notes:

# finding and labelling data

  • What data source IDs from MITRE are we associating with?
  • Security Essentials ships a lookup that we can use

```
| inputlookup mitre_enterprise_list
```
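As a hypothetical follow-on (the field names here are assumptions, worth confirming with `| inputlookup mitre_enterprise_list | fieldsummary`), that lookup can be pivoted to list which techniques each data source supports, feeding the gap analysis:

```
| inputlookup mitre_enterprise_list
| stats values(technique_id) as techniques by data_sources
```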

## ADS framework


Session slides (URL), Session video (URL).

SEC1224 – Mastering the Detection Engineering Lifecycle: From Data Ingestion to Detection Triumph

This was one of my favourite sessions; the two women who presented were clearly very strong technically and experienced, it was a pleasure to listen to, and it made me want to go away and really work through detection engineering. I took a lot of notes in this class, too many to repost in full here, but if you pick just one video to watch from .conf this would be it. Class (URL), awaiting video at the time of writing. A summary of my notes:

## Detection Lifecycle: Key Stages and Considerations

  • Define the objectives
    • What are we trying to find?
  • Identify the requirements
    • What is necessary in the environment to achieve the objective?
    • How do we go about detecting X?
    • What log sources do I need for the detection?
    • Is the data complete?
    • Does the data need to be enriched or correlated with TI?
    • What detection logic should I use? What's the response plan on detection?
  • Implementation, testing & validation
  • Continuously monitor and tune
    • What is the lifecycle of this detection, and how do we maintain it?
  • Reporting and metrics
    • How effective is this detection engineering?
    • How do we report our coverage and effectiveness?
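To make the stages concrete, here is a minimal hypothetical detection sketch of my own (the index name and the threshold of 20 are invented for illustration; EventCode 4625 is a Windows failed logon), of the kind you would then test, tune and version through this lifecycle:

```
index=wineventlog EventCode=4625
| stats count as failures, dc(user) as users by src_ip
| where failures > 20
```

Each stage maps on naturally: the objective is brute-force detection, the required log source is Windows Security events, and the `where` threshold is exactly the sort of thing the monitor-and-tune stage revisits.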

PLA2018 – Deployment Server Reunion Tour

This was sadly the last session I attended, and it was somewhat more casual, hosted in one of the theatre areas on the main conference floor. Essentially, a lot has changed on the deployment server in the past few years: no longer do you have to awkwardly scale the DS via a sort of waterfall method using 'noop' as taught in core implementation, as there is now an in-built scaling facility that can be readily used. I'd be keen to lab this out, as it is a rare thing to implement at scale at a client. Session slides (URL).

Search Party

I love that Splunk hire a band or singer to close out their conferences; in the past they've had Gwen Stefani, Snoop Dogg and others! This year they booked Weezer and rented the MGM Music Hall at Fenway Park. It was a neat venue, and despite being filled with nerds (yes, myself included) it had a hum of energy and fun about it. Wacky dancers with CRT tellies on their heads cut about throwing shapes, and themed cocktails flowed freely. We had a great time, tried to start a mosh pit to a cover of Enter Sandman, and enjoyed a good social with fellow Splunkers. A great end to the conference, and particularly enjoyable this year.