Proactively Responding to #CloudBleed with Splunk

What is CloudBleed?

Cloudbleed is a serious flaw in the Cloudflare content delivery network (CDN) discovered by Google Project Zero security researcher Tavis Ormandy. The vulnerability caused Cloudflare to leak data stored in memory in response to specially formed requests. Its behavior is similar to Heartbleed, but Cloudbleed is considered worse because Cloudflare accelerates the performance of nearly 5.5 million websites globally. The vulnerability might have exposed sensitive information, such as passwords, tokens, and cookies used to authenticate users, to web crawlers run by search engines or to nefarious actors. In some cases, the exposed information included messages exchanged by users on a popular dating site.

Understanding the severity of Cloudbleed

CDNs primarily act as a proxy between users and web servers, caching content locally to reduce the number of requests made to the original host server. In this case, edge servers in the Cloudflare infrastructure were susceptible to a buffer overflow vulnerability, exposing sensitive user information such as authentication tokens. Technical details of the disclosure can be viewed on the Project Zero bug tracker.

Evaluating risk with Splunk

The most obvious risk introduced by this vulnerability is exposed user data, which could include the same credentials users rely on for corporate authentication. An easy way to enumerate the scope of the problem is to compare the list of domains using Cloudflare DNS against your proxy or DNS logs. This gives you some insight into how often users visit the affected websites and the relative risk associated with reusing the same credentials across multiple accounts.

To do this analysis, we first need to download the list of Cloudflare domains and modify the file slightly so we can use it as a lookup.

$ git clone https://github.com/pirate/sites-using-cloudflare.git

Convert txt list to csv:
$ cat sorted_unique_cf.txt | sed -e 's/^/"/' > sorted_unique_cf.csv
$ cat sorted_unique_cf.csv | sed -e 's/$/","true"/' > sorted_unique_cf_final.csv
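
If you prefer, the two sed steps can be collapsed into a single invocation (equivalent to the pair above):
$ sed -e 's/^/"/' -e 's/$/","true"/' sorted_unique_cf.txt > sorted_unique_cf_final.csv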

Using a text editor, change the first line of the CSV file to add headers for the lookup:
"","true" to "domain","CloudflareDNS"

Finally, copy the formatted file to the lookups directory of the search app or a custom app that you use for security analysis.

$ cp sorted_unique_cf_final.csv /opt/splunk/etc/apps/security_viz/lookups/

After that step is complete, validate the lookup works:

|inputlookup sorted_unique_cf_final.csv

This might take some time because there are nearly 4.3 million domains in the lookup.

The domains on the list are not fully qualified domain names (FQDNs), so they will be harder to match against your proxy and IPS logs, which include subdomains. Use URL Toolbox to parse the DNS queries or HTTP URLs in your IPS or proxy logs.

Here is an example search that uses URL Toolbox to parse Suricata DNS queries:

index=suricata event_type=dns | lookup ut_parse_extended_lookup url AS query

In the example below, we are parsing DNS queries and comparing them against the Cloudflare lookup. When a domain in the DNS query events matches a domain in the lookup, that event gets a new field called CloudflareDNS with a value of “true”.

index=suricata event_type=dns
| lookup ut_parse_extended_lookup url AS query
| lookup sorted_unique_cf_final.csv domain AS ut_domain OUTPUT CloudflareDNS

The search above helps identify whether or not a domain uses Cloudflare DNS, but we can go a step further and use the new field to show only DNS requests for Cloudflare domains.

index=suricata event_type=dns
| lookup ut_parse_extended_lookup url AS query
| lookup sorted_unique_cf_final.csv domain AS ut_domain OUTPUT CloudflareDNS
| search CloudflareDNS=true

In my environment, I am checking about 4.3 million domains over a nine-month period (the entire span of my data) to determine whether a user has ever visited any of them. This is important because the leaked data was random, so credentials could have been compromised at any time.

Another trick I am going to use is to save the results to a CSV file, because this search can take a long time. Saving the results to a lookup allows you to review them later and filter through them without needing to run the search again.

index=suricata event_type=dns
| lookup ut_parse_extended_lookup url AS query
| lookup sorted_unique_cf_final.csv domain AS ut_domain OUTPUT CloudflareDNS
| search CloudflareDNS=true
| stats count by src_ip ut_domain
| outputlookup all_affected_domains.csv

The final output from the search shows that during the time period specified, we had nearly half a million visits to websites using Cloudflare. Breaking it down by src_ip, we can see specific users were more frequent visitors to impacted sites than others. These users should change any of their reused credentials as a precaution.
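
Because the results are saved to a lookup, ranking users by overall exposure is straightforward; a quick follow-up search along these lines (reusing the field names from the search above) does the trick:

|inputlookup all_affected_domains.csv
| stats sum(count) AS visits dc(ut_domain) AS distinct_domains by src_ip
| sort - visits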

Bonus: Use Dig to determine geography of impacted domains

Using a subset of data from the lookup, we can use sed and a short script to dig for the IP address associated with each domain:

Reduce our results to only the domains:
|inputlookup all_affected_domains.csv | dedup ut_domain
| rename ut_domain AS domain | fields domain
| outputlookup cloud-bleed_domain-only.csv

Remove the quotes surrounding each domain in the file:
$ cat cloud-bleed_domain-only.csv | sed -e 's/"//g' > my_cloud-bleed_domains.csv

Run this script to find the IP address for each domain:
$ bash domains.sh
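
The original post does not reproduce the contents of domains.sh, so the exact script may differ, but a minimal sketch that produces the expected output lookup (with domain and “IP Address” columns) could look like this:

#!/usr/bin/env bash
# Hypothetical reconstruction of domains.sh: resolve each domain with dig
# and write a CSV lookup with "domain" and "IP Address" columns.
echo '"domain","IP Address"' > dig_cloud-bleed_domains.csv
while read -r domain; do
    [ "$domain" = "domain" ] && continue  # skip the header row from the exported lookup
    ip=$(dig +short "$domain" A | grep -m1 -E '^[0-9]+(\.[0-9]+){3}$')
    echo "\"$domain\",\"$ip\"" >> dig_cloud-bleed_domains.csv
done < my_cloud-bleed_domains.csv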

After running the script, you will have a new lookup called “dig_cloud-bleed_domains.csv” which you can further analyze in Splunk.
| inputlookup dig_cloud-bleed_domains.csv where "IP Address"=* | rename "IP Address" AS ip_address
| iplocation ip_address
| stats count by Country
| geom geo_countries featureIdField=Country | eval count="count: "+count

This choropleth map showcases the geographical location of each of the affected domains visited by users in my environment. The majority of the traffic was to sites based in the United States, with traffic to Germany in a distant second place.

from Splunk Blogs http://blogs.splunk.com/2017/02/28/proactive-response-to-cloudbleed-with-splunk/

From API to Easy Street

30? 20? …15?  It all depends on how well you know your third-party API. The point is that polling data from third-party APIs is easier than ever. CIM mapping is now a fun experience.

Want to find out more about what I mean?  Read the rest of this blog and explore what’s new in Add-on Builder 2.1.0.

REST Connect… and with checkpointing

Interestingly, this blog post happens to address a problem I faced on my very first project at Splunk. When I started at Splunk as a sales engineer, I worked on building a prototype of the ServiceNow Add-on. Writing Python, scripted inputs vs. modular inputs, conf files, setup.xml, packaging, best practices, password encryption, checkpointing… the list goes on. Dealing with all of these was tough, to say the least, and I wondered why it couldn't be easier.

Fast forward to today, and an easy solution has finally arrived. You can now build all of the above with the latest version of Add-on Builder, all without writing any code or dealing with conf files. If you know your third-party API, you could be building the corresponding mod input in minutes.

One powerful addition to the new data input builder is checkpointing. In case you were wondering, checkpoints are for APIs what file pointers are for file monitoring. Instead of polling all data from an API on every run, checkpointing lets you poll incrementally, fetching only new events at each interval. Checkpointing can be a complicated concept, but it is essential to active data polling. Luckily, it is no longer as complex as it used to be.

For an example of doing this in Add-on Builder 2.1.0, check out Andrea Longdon’s awesome walkthrough using the New York Times API. This cool example shows you how to monitor and index NY Times articles based on user-defined keywords.

You will be able to define your app/add-on setup and automatically encrypt passwords using the storage passwords endpoint, all in a drag-and-drop interface.
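
For context, these encrypted credentials are stored via Splunk’s storage/passwords REST endpoint. The Add-on Builder handles this for you, but for reference a credential could be created manually against a local instance along these lines (the hostname, app name, and credential values below are placeholders):

$ curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/my_addon/storage/passwords \
    -d name=api_user -d password=api_secret -d realm=my_realm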

CIM update at run-time

CIM mapping has the following major enhancements:

  • A new UI that makes it possible to compare fields from your third-party source and CIM model fields side by side.
  • You can also update CIM mapping objects even if they were built outside of Add-on Builder, with no restart needed. In other words, you can now update CIM mappings at run time in a single view from Add-on Builder.

What else is new?

  • The Add-on Builder has a new and enhanced setup library consistent with modern Splunk-built add-ons.
  • You can now import and export add-on projects, allowing you to work on an add-on on different computers and share projects with others. For details, see Import and export add-on projects.
  • One of my favorites: no more interruptions caused by having to restart Splunk Enterprise when building new data inputs, creating a new add-on, or any other step. Go through the end-to-end process, undisturbed.

Please check out our latest release. We would love to hear from you. Teaser alert: in the next blog post, I will share how to build a SolarWinds add-on using Add-on Builder 2.1.0.

Happy Splunking!

 

from Splunk Blogs http://blogs.splunk.com/2017/02/21/from-api-to-easy-street/

Splunk Partner+ Program Announces 2017 Global Partner Awards

With the convergence of Splunk’s new fiscal year, Global Partner Summit and our Global Partner+ Program Awards, there’s cause for celebration as we look back on the previous year’s achievements and look toward FY’18.

Partners are vital to Splunk and continue to push the envelope and innovate every day. From creative sales techniques to innovative program execution, from app and technology development to delivering world-class services, Splunk Partners excel in their commitment to customers and the Splunk Partner+ Program while demonstrating an ability to strategically find and lead incremental business.

Throughout the past 12 months, Splunk Partners achieved remarkable success. The Partner+ team would like to recognize select partners who exemplified the core values of the Splunk Partner+ Program coupled with stellar performance. Our global Partner+ Award winners go above and beyond expectations to deliver outstanding customer successes.

Before we announce the FY’17 Partner Awards, I want to personally thank each and every Splunk Partner for your constant collaboration, commitment to excellence, and customer-first mentality. Splunk thrives when you succeed and your partnership drives Splunk to a higher standard.

Join me as I congratulate and virtually high-five our entire Global Partner ecosystem and this year’s Partner+ Award winners!

FY’2017 Splunk Partner+ Award Winners:

Global Winners

  • Global Reseller Partner of the Year: Red River
  • Global Distribution Partner of the Year: Carahsoft
  • Global Alliance Partner of the Year: Accenture
  • Global Partner Marketing Excellence: Kinney Group
  • Buttercup Award: OnX

Americas Winners

  • Americas Partner of the Year: CDW
  • Americas Distribution Partner of the Year: Arrow
  • Americas Rookie Partner of the Year: rSolutions
  • Americas Services Partner of the Year: Vivatas

Public Sector Winners

  • Public Sector Partner of the Year: Red River
  • Public Sector Distribution Partner of the Year: Carahsoft
  • Public Sector SLED Partner of the Year: CDW-G
  • Public Sector Services Partner of the Year: August Schell
  • Public Sector SI Partner of the Year: Deloitte

EMEA Winners

  • EMEA Partner of the Year: magellan netzwerke (Germany)
  • EMEA Distribution Partner of the Year: RRC (Russia)
  • EMEA Rookie Partner of the Year: Bechtle (Germany)
  • EMEA Services Partner of the Year: Riversafe Ltd. (UK)
  • EMEA North Partner of the Year: SMT (Netherlands)
  • EMEA Central Partner of the Year: magellan netzwerke (Germany)
  • EMEA South Partner of the Year: E & M Computing (Israel)
  • META Partner of the Year: Help AG (United Arab Emirates)

APAC Winners

We are energized and ready to kick off FY’18 and continue on our path to excellence, growth and new opportunities with you, our valued Partners. I can’t wait to see what’s next!

Thanks,
Jessica Walker McFarland
Director, Global Partner Marketing
Splunk

from Splunk Blogs http://blogs.splunk.com/2017/02/21/splunk-partner-program-announces-2017-global-partner-awards/

What Happens When You Move From Reactive to Proactive IT

Most IT departments want to make an impact, but fire drills and troubleshooting usually get in the way. Oftentimes, you find yourself playing the blame game. But what if you could get in front of an issue before an incident happens, rather than responding to it after the fact? What if you were no longer reactive to the situation, but instead could focus on aligning with business objectives?

Well, it’s not rocket science, but it hasn’t been easy to date! In this post, I’m here to share how enterprise organizations have been able to move past blame game and take the guesswork out of issue resolution. Let’s look at how one company has embraced the strategic opportunity of IT to align themselves with the business priorities.

In the past, when an IT issue happened at Molina Healthcare, it ended up being yet another crazy day in IT operations—calls, escalations, a bunch of different tools, each with its own console reporting its own interpretation of the data, and a very rudimentary process of elimination to put out the fire. Sound familiar? Molina needed to solve this, went through a tools rationalization exercise, and concluded that its operational model needed a revamp. They turned their eyes to Splunk IT Service Intelligence (ITSI).

Using insights from Splunk ITSI, Molina gained visibility and correlation across its stack, which has reduced the number of IT incidents by 500% and MTTR by 150%. The Molina team relies on out-of-the-box Splunk ITSI dashboards focusing on the top 50 services that provide insight into infrastructure and application availability, performance and KPIs. Check out more details in this video:

http://player.ooyala.com/iframe.js#pbid=ade1a0fda2b44f91b972696d0aed07d8&ec=14Z2RqOTE63MdNoNIQwHiS-0xkmr_fI7

Like I mentioned earlier, this isn’t rocket science. You can help your IT team become more proactive by embracing the strategic opportunity of IT or get started on your own with the free Splunk ITSI sandbox.

from Splunk Blogs http://blogs.splunk.com/2017/02/21/molina-healthcare/

SSL Proxy: Splunk & NGINX

Who is this guide for?

It is a best practice to install Splunk as a non-root user or service account as part of a defense-in-depth strategy. This installation choice comes with the consequence of preventing the Splunk user from binding to privileged ports (anything below 1024). Some of the solutions to this problem found on Splunk Answers require iptables rules or other methods. In my experience, the iptables method is not that reliable, and many newer Linux distributions are abandoning iptables in favor of firewalld as the default host firewall. In this guide, I will show you how to use Nginx and Let’s Encrypt to secure your Splunk search head while serving SSL traffic on port 443.

Prerequisites

• OS which supports the latest version of Nginx
• Linux OS required for Let’s Encrypt (if you choose to use it as your CA)
• Root access to the search head

Configuration

The easiest way to get both products installed is to use yum or apt depending on your flavor of Linux.

Install Let’s Encrypt, Configure Splunk Web SSL

In a previous blog post, I provided a guide to generate SSL certificates and configure Splunk Web to use them. Follow that guide, or your own organizational process for generating certificates, before proceeding with the next steps.

Install Nginx

$ sudo apt install nginx
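
On RHEL- or CentOS-based systems, the equivalent (assuming the EPEL or official Nginx repository is available) would be:

$ sudo yum install nginx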

Configure Nginx to use SSL

Create a configuration for your site; it is best to name the file after the hostname/domain name of the Splunk server. The file should be created in /etc/nginx/sites-enabled:

$ touch /etc/nginx/sites-enabled/splunk-es.anthonytellez.com

To configure Nginx for SSL, you only need three pieces of information:
• location of the certificate you plan to use
• location of the private key used to generate the certificate
• ssl port(s) to redirect

Example Configuration of splunk-es.anthonytellez.com:

server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /opt/splunk/etc/auth/anthonytellez/fullchain.pem;
    ssl_certificate_key /opt/splunk/etc/auth/anthonytellez/privkey.pem;
    location / {
        proxy_pass https://127.0.0.1:8000;
    }
}
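
Though not part of the original configuration, it is also common to pass the client’s hostname and address through to the backend so Splunk’s web access logs reflect the real source; a hedged example of the location block with those headers added:

    location / {
        proxy_pass https://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }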

Reload Nginx:

 $ nginx -s reload

Optional: Redirect all HTTP requests

To prevent users from seeing the default webpage served by Nginx, you should also redirect traffic from port 80 to port 443, which also prevents leaking information about the version of Nginx running on your server.

server {
    listen 80;
    server_name splunk-es.anthonytellez.com;
    return 301 https://$host$request_uri;
}

Optional: Enable HSTS

HSTS is a web security policy mechanism that helps protect websites against protocol downgrade attacks and cookie hijacking. Enabling it in Nginx can help protect you if you ever access your Splunk instance from an unprotected network. The example below sets a max-age of 300 seconds; you can increase this to a longer period once you have validated that the configuration is working.

server {
    listen 443 ssl;
    add_header Strict-Transport-Security "max-age=300; includeSubDomains" always;
    ssl on;
    ssl_certificate /opt/splunk/etc/auth/anthonytellez/fullchain.pem;
    ssl_certificate_key /opt/splunk/etc/auth/anthonytellez/privkey.pem;
    location / {
        proxy_pass https://127.0.0.1:8000;
    }
}

HSTS will force browsers to request the https version of the site once they have processed this header. If you have issues validating that HSTS is working in your browser of choice, check out this resource on Stack Exchange: How can I see which sites have set the HSTS flag in my browser?
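
You can also confirm the header is being served from the command line; a quick check using the example hostname from above might look like:

$ curl -skI https://splunk-es.anthonytellez.com | grep -i strict-transport-security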

from Splunk Blogs http://blogs.splunk.com/2017/02/20/ssl-proxy-splunk-nginx/

Splunking Microsoft Azure Network Watcher Data

 

Microsoft has released a new service in Azure called Network Watcher. Network Watcher is a network performance monitoring, diagnostic, and analytics service that enables you to monitor your network in Azure. The data collected by Network Watcher is stored in one or more Azure Storage containers. The Splunk Add-on for Microsoft Cloud Services has inputs to collect data stored in Azure Storage containers, which provides valuable operational intelligence regarding Azure network workloads. In this blog post, we will explore how to get Azure Network Security Group (NSG) Flow Logs into Splunk and some possible use case scenarios for the data.

Getting Azure NSG Flow Log data into Splunk

NSG flow logs allow you to view information about ingress and egress IP traffic on your Network Security Groups. These flow logs show the following information:

  • Outbound and Inbound flows on a per Rule basis
  • Which NIC the flow applies to
  • Tuple information about the flow (Source/Destination IP, Source/Destination Port, Protocol)
  • Information about whether the traffic was allowed or denied

Getting Azure NSG Flow Log data into Splunk involves two basic steps:

  • Configure NSG Flow Logs in the Azure Portal
  • Configure the Splunk Add-on for Microsoft Cloud Services to ingest the flow logs

Configuring NSG Flow Logs in the Azure Portal

From the Azure Portal, select Browse -> Network security groups

Select an existing security group and choose Settings -> Diagnostics to turn on data collection.

Choose a storage account to send the logs to, and enable NetworkSecurityGroupFlowEvent.

Configuring the Splunk Add-on for Microsoft Cloud Services to ingest NSG Flow Logs

Download and install the Splunk Add-on for Microsoft Cloud Services in accordance with the documentation.

After installation of the add-on, connect the add-on to the Azure Storage Account specified above.

The NSG Flow Log data is kept in an Azure Storage blob container named insights-logs-networksecuritygroupflowevent.

Configure an Azure Storage Blob input for this container.

Notice that the sourcetype is set to mscs:nsg:flow. You do not have to use this sourcetype; I just chose it as an easy way to differentiate the data. Here is a handy props.conf configuration to break the JSON array into individual events:

[mscs:nsg:flow]
# Break the JSON "records" array into one event per record
LINE_BREAKER = \}([\r\n]\s*,[\r\n]\s*)\{
# Strip the {"records": [ wrapper and the trailing ]} so each event is valid JSON
SEDCMD-remove_header = s/\{\s*\"records\"\:\s*\[\s*//g
SEDCMD-remove_footer = s/\][\r\n]\s*\}.*//g
SHOULD_LINEMERGE = false
KV_MODE = json
TIME_PREFIX = time\":\"
# Apply the flow-tuple field extraction defined in transforms.conf
REPORT-tuples = extract_tuple

Here is a handy transforms.conf delimiter for the tuples in the data:

[extract_tuple]
SOURCE_KEY = properties.flows{}.flows{}.flowTuples{}
DELIMS = ","
FIELDS = time,src_ip,dst_ip,src_port,dst_port,protocol,traffic_flow,traffic_result
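
For reference (an illustrative value, not taken from the post), each entry in flowTuples is a comma-separated string whose positions line up with the FIELDS above, along the lines of:

1487765529,192.168.1.4,10.0.0.4,35370,23,T,I,A

Here T/U indicates TCP or UDP, I/O indicates inbound or outbound, and A/D indicates whether the traffic was allowed or denied.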

Searching the NSG Flow Log Data with Splunk

Once the input from above is created, the NSG Flow Log data will be available to search in Splunk.  Some potential use cases for this data include:

Monitoring Protocols – this is a security and compliance use case.  Ensure only the correct protocols are in use and monitor the traffic usage of each protocol over time.

sourcetype=mscs:nsg:flow | top protocol by dst_ip

Monitoring Traffic Flow – this is useful to identify potentially rogue communication.  For instance, if a source machine in your Azure environment sends traffic to a known bad destination address, this could indicate potential malware.

sourcetype=mscs:nsg:flow | stats count by src_ip dst_ip

This search could also be rendered as a Sankey diagram to visualize the flow.

Monitoring Allowed vs. Denied Traffic – this could indicate an attack or a misconfiguration.  If you are seeing a lot of denied traffic, this could indicate a misconfiguration of software that is trying to communicate with your Azure resources.

sourcetype=mscs:nsg:flow | stats count by traffic_result src_ip

Top Destination Addresses/Ports – this is useful for security and for monitoring the usage of services hosted in Azure.

sourcetype=mscs:nsg:flow | top dst_port by dst_ip

Conclusion

Even though NSG Flow Logs are a new data source made available by Microsoft Azure, the Splunk Add-on for Microsoft Cloud Services is ready to ingest this data source today, giving you an even greater degree of operational insight and intelligence for your Microsoft Azure environment.

from Splunk Blogs http://blogs.splunk.com/2017/02/20/splunking-microsoft-azure-network-watcher-data/

Splunk DB Connect 3 Released

Splunk DB Connect has just gotten a major upgrade! Let’s take a look at it.

What’s New

Splunk DB Connect 3.0 is a major release to one of the most popular Splunk add-ons. Splunk DB Connect enables powerful linkages between Splunk and the structured data world of SQL and JDBC. The major improvements of this release are:

  • Performance improvement. Under similar hardware conditions and environment, DB Connect V3 is 2 to 10 times faster than DB Connect V2, depending on the task.
  • Usability improvement. A new SQL Explorer interface assists with SQL and SPL report creation.
  • Improved support for scripted configuration, via reorganized configuration files and redesigned checkpointing system. Note that rising column checkpoints are no longer stored in configuration files.
  • Stored procedure support in dbxquery.
  • Improved retry policy on scheduled tasks (no more need for auto_disable).

Backward Compatibility Changes

As part of this major release, we are making changes that will affect some users. The features that will have backward compatibility changes are:

  • Resource pooling is removed. If you are now using resource pooling, the configuration will be removed and all scheduled tasks will operate on the master node only. Resource pool nodes can be repurposed.
  • Scheduled tasks (inputs, outputs) are disabled on search head clusters. You can still perform outputs using the dbxoutput command on a search head cluster. If you are now using scheduled tasks on DB Connect V2, you need to move the configuration files from the cluster master to a heavy forwarder, then upgrade in place to DB Connect 3.
  • Lookups redesigned. For performance and clarity reasons, automatic and scripted lookups have been replaced with a simpler, more performant dbxlookup command. If you are now using scripted lookups for their caching behavior, you can replicate this behavior and avoid search changes by creating a scheduled dbxquery task which outputs a lookup with the same name. If you are now using automatic lookups for live database access, you need to edit the searches to use the dbxlookup command instead of lookup.
  • dbxquery command options changed. The output and wrap options are deprecated and have no effect; output now behaves as if set to CSV and wrap as if set to false. The shortnames option is now set to true by default (see the example below).
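
For readers who have not used the command before, a basic dbxquery invocation in version 3 looks roughly like the following (the connection name and SQL here are placeholders; consult the DB Connect documentation for the full option list):

| dbxquery connection="my_connection" query="SELECT hostname, status FROM assets"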

Migration

DB Connect users should review the documentation and test the upgrade before moving DB Connect 3 into production. If you simply upgrade the existing package in production, data will no longer flow. The version 3 package includes a migration script; see http://docs.splunk.com/Documentation/DBX/3.0.0/DeployDBX/MigratefromDBConnectv1 for documentation. Users of Spark SQL, Teradata, or Oracle databases may need to take additional manual steps to complete driver migration.

from Splunk Blogs http://blogs.splunk.com/2017/02/20/splunk-db-connect-3-released/