VMworld recap: cloud shifts, Dell’s emergence and why on-premise data centers still matter

Network World’s Brandon Butler and IDC’s Matt Eastwood discuss major highlights from the VMworld show in Las Vegas.

from Computerworld Cloud Computing http://www.computerworld.com/video/68955/vmworld-recap-cloud-shifts-dells-emergence-and-why-on-premise-data-centers-still-matter#tk.rss_cloudcomputing

Teradata Punches the Value Accelerator: Pushes Slow-Moving IoT Projects to the Fast Lane

Teradata Corp. (NYSE: TDC), the big data analytics company, announced four powerful software-and-service solutions that speed up the transformation of Internet of Things (IoT) data into actionable insight.

from insideBIGDATA http://insidebigdata.com/2016/08/31/teradata-punches-the-value-accelerator-pushes-slow-moving-iot-projects-to-the-fast-lane/

Webinars for Sept 6-8: Power BI for Developers and What is new and exciting in Power BI

Next week, world-class BI expert and presenter Peter Myers will be covering his favorite topic, BI features for developers, in a webinar on September 6. On September 8, the product team will walk through “What is new and exciting in Power BI” in the August updates to Desktop, Mobile, and the service.

from Business Intelligence Blogs https://powerbi.microsoft.com/en-us/blog/webinars-for-sept-6-8-power-bi-for-developers-and-what-is-new-and-exciting-in-power-bi/

Adding a Deployment Server / Forwarder Management to a new or existing Splunk Cloud (or Splunk Enterprise) Deployment

As part of the Cloud Adoption team, I work with Splunk Cloud (and Splunk Enterprise) customers on a daily basis, and I am frequently asked how to optimize, and effectively reduce, administration overhead. This becomes especially relevant when I am talking with new or relatively new customers that are expanding from a handful of forwarders into the hundreds or thousands. And I always say: start with a Deployment Server.

For larger customers that have trained and experienced Splunk Administrators, or have engaged with Professional Services, this is a given and typically already exists in their deployments.

At the other end of the spectrum, however, new Splunk Cloud and Splunk Enterprise customers may not have this luxury.

This article is for you.

I won’t go into full detail on how and why this works, but I will outline what configurations are needed and how this scales, based on my field experience and on what our best practices outline. The configurations here are based upon Splunk’s Professional Services Base Configurations toolset.


This article outlines how to configure a DS to deploy apps on your local network. From an architecture point of view, the Cloud Forwarder App contains the configs to send your data to your Splunk Cloud instance. It could be interchanged with an App that forwards to on-premises Indexers or an HF/UF Aggregation Tier, but that’s a different discussion…

Let’s get some terminology out of the way…

Deployment Server (DS) – A Splunk Enterprise instance that acts as a centralized configuration manager. It deploys configuration updates to other instances. Also refers to the overall configuration update facility comprising deployment server, clients, and apps.

Deployment Client – A remotely configured Splunk Enterprise instance. It receives updates from the deployment server. Typically these are Splunk Universal Forwarders or Heavy Forwarders.

Server Class – A deployment configuration category shared by a group of deployment clients. A deployment client can belong to multiple server classes.

Deployment App – A unit of content deployed to the members of one or more server classes.

So let’s dig in!

First off, we need a dedicated Splunk Heavy Forwarder (HF/HWF) instance that will be the DS. This instance should already be configured and sending its data to your Splunk Cloud instance; this document assumes it is installed in /opt/splunk.

Here, a virtual machine is more than sufficient, and preferred. But follow the recommended spec: 4 cores and 8 GB of RAM, plus sufficient disk space to handle your deployment apps. (Typically 50 GB is more than enough!) Additionally, while not required, a 64-bit Linux host is ideal and will give you the most mileage.

This server also needs to be placed on the network in such a way that all hosts can communicate with it. This means that firewalls will need to be opened for the Splunk management port (TCP 8089 by default) to the DS host, or multiple DSes will need to be deployed.

Additionally, we need our “Apps”.

In this article we will deploy Splunk_TA_nix, the “100_demostack_splunkcloud” app from our Splunk Cloud stack, and org_deployment_client. (More on this last one later!)


These Apps all need to be placed in the /opt/splunk/etc/deployment-apps/ directory. Once they are placed here, they will be visible in the Splunk Web interface, on the Forwarder Management page.
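As a rough sketch, the deployment-apps directory would then look something like this (app names follow this article’s examples, and the inner layout of org_deployment_client is an assumption based on the PS Base Configs style; your apps and contents will vary):

```
/opt/splunk/etc/deployment-apps/
├── 100_demostack_splunkcloud/
├── Splunk_TA_nix/
└── org_deployment_client/
    ├── local/
    │   └── deploymentclient.conf
    └── metadata/
        └── local.meta
```

Each top-level directory here becomes one Deployment App that can be mapped to Server Classes.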


From here, we are able to build our Server Classes. To do this, we want to consider our deployment topology. In a nutshell, a DS can filter on hostname, IP address, or machine type, so we have a few options for deploying to all of our Clients.

Now we will set up our Server Classes.

First we set up a Server Class for all Clients. We are going to call this “All_Hosts”.


Once we create this, we can add Apps and Clients to the Server Class.


Let’s add our org_deployment_client and 100_demostack_splunkcloud Apps to the All_Hosts serverclass.


And next, we need to add Clients. At this point, there are no clients connecting to this DS. However, since this class is for all clients, we add an include whitelist of ‘*’.


Next, repeat the creation of a serverclass, but with the Splunk_TA_nix app added. As for filtering: until a client connects, you are not able to filter on machine type, which means you need to filter on host name or IP address until those machine types connect. In this example, I created a filter for host names matching “nix-*, ubuntu*”.
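For reference, the server classes built in the GUI above correspond roughly to a serverclass.conf on the DS like this minimal sketch (“Nix_Hosts” is a hypothetical name for the second serverclass; app names follow this article’s examples):

```
# $SPLUNK_HOME/etc/system/local/serverclass.conf on the DS

[serverClass:All_Hosts]
# An include whitelist of '*' matches every client that phones home
whitelist.0 = *

[serverClass:All_Hosts:app:org_deployment_client]
[serverClass:All_Hosts:app:100_demostack_splunkcloud]

[serverClass:Nix_Hosts]
# Until clients connect, filter on host name rather than machine type
whitelist.0 = nix-*
whitelist.1 = ubuntu*

[serverClass:Nix_Hosts:app:Splunk_TA_nix]
```

Whether you edit this file directly or use the Forwarder Management GUI, the result is the same mapping of Apps to Server Classes.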


Once this is done, your DS is ready and awaiting clients to connect!

Connecting Clients

Previously I mentioned the “org_deployment_client” app. Let’s revisit this now.

Typically, to configure a client to connect to a DS, we either add it through the CLI (via splunk set deploy-poll servername.mydomain.com:8089) or we edit the deploymentclient.conf file in /opt/splunk/etc/system/local and restart.

That’s fine! It works… BUT… it is local. Once you put it there, you have to manually change it (or, if you’re lucky, automate it). But I digress.

Instead, let’s make an app that connects to the DS from the start. Here’s where the org_deployment_client app comes into play.

Taken from the Splunk PS Base Configs, here is the template:

[deployment-client]
# Set the phoneHome interval at the end of the PS engagement
# 10 minutes
# phoneHomeIntervalInSecs = 600

[target-broker:deploymentServer]
# Change the targetUri
targetUri = deploymentserver.splunk.mycompany.com:8089

As you can guess, we update the targetUri to point to the address and management port of our DS. It’s highly recommended to use a DNS name for this, not an IP address. And as of 6.3, this can also be a load balancer… <finally…woot!!>

Now, the most difficult part: the org_deployment_client app needs to be deployed to all our UFs at install time, or after deployment. This gives us the ability to change the targetUri and phoneHomeIntervalInSecs in the future without having to touch every forwarder! There are many ways to accomplish this: some use git/mercurial/cvs to script the delivery, some build custom install packages that lay it down automatically, and others deploy it manually after installation. However you want to do it, do it!

Back on track: once this is deployed, we install our clients (with the org_deployment_client app). In this case, I don’t have the apps configured to restart Splunk once they are downloaded from the DS, so a manual restart is required. Afterwards, we can check the Forwarder Management GUI and confirm our hosts and the apps deployed.


From here, we have our hosts sending their logs to Splunk Cloud. This includes enabled TAs and modular inputs.

There are “Gotchas”… Please don’t do this!

Here are a few things to take into consideration, and not to do.

1) Search Head Cluster (SHC) members – These cannot be deployment clients of a DS; the Deployer node handles this functionality.

2) Index Cluster members – These cannot be deployment clients of a DS; the Cluster Master node handles deployment of configurations.

3) Using automation (Puppet / Chef / Ansible, etc.) – Be careful when using these in conjunction with a DS; configs can disappear and things can break…

4) Test your serverclasses.conf changes in a DEV environment!!

5) Standardize on a naming convention for your Server Classes and App names. Here I used org_deployment_client, but for your company it might be something like mycompany_deploymentclient_securelan and mycompany_deploymentclient_dmz1.

There are a lot of features and functionality available in the Deployment Server that I didn’t cover here. Our Education team does a wonderful job of teaching this, and Splunk PS can also spend time going over the different features of the DS and how to get it to scale. Please reach out if you want to learn more!

Additional Reading:
Capacity Planning Manual for Splunk Enterprise
Updating Splunk Enterprise Instances – Deployment server architecture
Updating Splunk Enterprise Instances – Plan a deployment
Updating Splunk Enterprise Instances – Configure deployment clients

Eric Six and Dennis Bourg

from Splunk Blogs http://blogs.splunk.com/2016/08/31/adding-a-deployment-server-forwarder-management-to-a-new-or-existing-splunk-cloud-or-splunk-enterprise-deployment/

Splunk documentation feedback: how it works and what makes a champion

On Monday this week, we at Splunk HQ had the pleasure of hosting Rich Mahlerwein, founding SplunkTrust member, cape-and-fez wearer, and Senior Systems Engineer at the Forest County Potawatomi IT Department. During his visit, I asked Rich to come and meet the documentation team.

Rich is legendary among the Splunk doc writers for the quality of the feedback he offers, and how often he sends it. So they were eager to meet him in person.

Here on the Splunk documentation team, our writers work hard to make sure our content is relevant, accurate, and matches the way our customers use Splunk software in the real world. An essential aspect of that is customer feedback. The Splunk doc team enjoys a constant, ongoing conversation with our customers. We email or talk with over 40 customers a week. I often describe that continual contact as “the fuel in our engine.” If you have ever submitted feedback on Splunk documentation, you know that we always follow up. Your comments and suggestions help make our documentation great and ensure that we are giving you the information you need to be successful and confident when you use Splunk software.

The feedback form at the bottom of every topic gives you two ways to contact us: Was this topic useful? and Post a comment. Most customers choose the default Was this topic useful? option, which emails the doc team directly. If you choose Post a comment, it will post a visible comment on the topic itself, and we will respond there.

The steady stream of doc feedback email provides our greatest opportunity to work directly with you on the issue you found. Here are two quick tips that will help us follow up:

  • Submit your feedback when you are logged in to splunk.com, or include your email address so we can reach you.
  • Use the YES or NO option to tell us if the topic was useful, and then tell us why. Be specific. If you take the extra few minutes to give us this information, we are in a much better position to make the improvements you want to see.

Here is an example of doc feedback that we can’t do much with, because we have no contact information and no sense of the issue the customer was struggling with.

User: 63.XX.XXX.XX
Result: NO
URL: http://docs.splunk.com/Documentation/Splunk/6.4.2/Admin/Specifyaproxyserver
Additional comments:

Let’s get back to Rich. What does his feedback look like? Here’s a recent example.

User: 199.XX.XX.XXX
Email: rich....
Result: YES
URL: http://docs.splunk.com/Documentation/Splunk/6.3.5/Data/MonitorWindowsprinterinformation
Additional comments: I feel the "Fields for Windows print monitoring data" should be more fleshed out.  I enabled an input for this for my printer guy, and we've noticed a few things that I think this document ought to explain.
For instance, for each job we get two events - "operation=set" and "operation=add".  I think perhaps the latter is it adding the job to the queue and the set is something else, perhaps the actual printing start itself?  I suspect this information is the same for nearly all recent versions of Windows, and there's only 18 fields I think, so perhaps listing those fields and what they mean could be useful.
I can send in a dozen events if you want, just won't do it here because the text box thingy I'm typing in eats formatting.  :(
Also up nearer the top it says
"The printer monitor input runs as a process called splunk-winprintmon.exe. This process runs once for every input you define, at the interval specified in the input. You can configure printer subsystem monitoring using Splunk Web or inputs.conf"
That's great, but is there any more information on exactly what it's grabbing?  From *where* does the winprintmon get its information?  It wouldn't be important except it's reporting slightly different stuff than windows does and knowing where its getting its info would help.
Oh, and one more thing?  Why not, perhaps as an additional possibility, set up the windows Event logs Microsoft-Windows-PrintService/Operational and pull off the Event ID 307?  Those seem to have a fairly compact and full set of information: all the "usual" windows event log fields like user (FILLED OUT FOR ONCE  YAY WINDOWS!) and computer, then text that looks very parseable: "Document 81, Print Document owned by colette.dehart on PRO-SCN-02 was printed on PRO-BLK-01 through port  Size in bytes: 126832. Pages printed: 1. No user action is required."  Just a thought.

A message like that enables us to contact the submitter immediately, have a meaningful conversation, and improve the docs right away.

So far this year, Rich has submitted 19 exemplary feedback emails like this one, plus an uncounted number of informal comments and suggestions through the splunk-usergroups Slack channel. There isn’t a month that goes by when we don’t hear from Rich two or three times, with specific, detailed, thorough suggestions and questions that enable the writers to focus their efforts on improving our content for all customers. In this way, as in so many others, Rich is a champion Splunk community member, making sure to apply what he knows about Splunk software so that it benefits everyone else. And when he submits feedback, his characteristic sense of humor is always on full display.

We marked the occasion of Rich’s visit to HQ by presenting him with a “splunk > docs feedback champion” trophy, in front of the entire Splunk doc team. Here’s a picture of Rich with his award.


And here he is with as many doc team members as we could cram in around him.


If you bump into Rich at .conf 2016 in Orlando, ask him about it!

from Splunk Blogs http://blogs.splunk.com/2016/08/31/splunk-documentation-feedback-how-it-works-and-what-makes-a-champion/

Qlik Sense 3.0 Service Release 2 is now available

Hi Qlik Sense users


We are pleased to announce that Qlik Sense 3.0 Service Release 2 (build 3.0.2) is now available on our download site. SR 2 contains several bug fixes, details of which can be found in the attached release notes. Upgrade and installation instructions are also attached for your reference.


As always, when upgrading any software, be sure to carry out the necessary backups. If you have any questions or need any help, please contact support.



Global Support Team

from Jive Syndication Feed https://community.qlik.com/blogs/supportupdates/2016/08/31/qlik-sense-30-service-release-2-is-now-available

Azure Platform Monitoring Overview

In this session we provide an overview of Azure monitoring and the various facilities it offers, ranging from metrics, logs, alerts, and autoscale to integration with other services. It is really informative and gives a sense of how almost all Azure services share a common base for monitoring. Let me know your thoughts!



from Channel 9 https://channel9.msdn.com/Blogs/Seth-Juarez/Azure-Platform-Monitoring-Overview