A True Hybrid Cloud Platform

Where are we today? Since the launch of Qlik Sense Cloud in 2014, we have seen rapid growth in our cloud community, which now includes over 100,000 users worldwide. We continue to expand and improve our offerings and today we offer multiple…

from Jive Syndication Feed https://community.qlik.com/blogs/qlikproductinnovation/2017/05/26/a-true-hybrid-cloud-platform

Horizontal Bar Chart Extension

Last year I blogged about our Mobile Friendly Horizontal Bar Chart that we use in most of our mashups in the Qlik Demo Team. Since then, many things have changed. For a start, if you have a mashup that uses many objects, you will see the…

from Jive Syndication Feed https://community.qlik.com/blogs/qlikviewdesignblog/2017/05/26/horizontal-bar-chart-extension

Tech Tip Thursday: Dynamic Power BI reports using Parameters

Did you know that you can dynamically filter data in Power BI using parameters that are stored in an Excel workbook? In this video, Patrick from Guy in a Cube shows us how, using M Functions within Power Query and a gateway to enable data refresh. Check it out!

from Microsoft Power BI Blog | Microsoft Power BI https://powerbi.microsoft.com/en-us/blog/tech-tip-thursday-dynamic-power-bi-reports-using-parameters/

Deployment of Pre-Trained Models on Azure Container Services

This post is authored by Mathew Salvaris, Ilia Karmanov and Jaya Mathew.

Data scientists and engineers routinely encounter issues when moving their final functional software and code from their development environment (laptop, desktop) to a test environment, or from a staging environment to production. These difficulties primarily stem from differences between the underlying software environments and infrastructure, and they eventually end up costing businesses a lot of time and money, as data scientists and engineers work towards narrowing down these incompatibilities and either modify software or update environments to meet their needs.

Containers end up being a great solution in such scenarios, as the entire runtime environment (application, libraries, binaries and other configuration files) gets bundled into a package to ensure smooth portability of software across different environments. Using containers can, therefore, improve the speed at which apps can be developed, tested, deployed and shared among users working in different environments. Docker is a leading software container platform that enables developers, operators and enterprises to overcome these application portability issues.

The goal of Azure Container Services (ACS) is to provide a container hosting environment using popular open-source tools and technologies. Like all software, machine learning (ML) models can be tricky to deploy due to the plethora of libraries used and their dependencies. In this tutorial, we will demonstrate how to deploy a pre-trained deep learning model using ACS. ACS enables the user to configure, construct and manage a cluster of virtual machines preconfigured to run containerized applications. Once the cluster is set up, DC/OS is used for scheduling and orchestration. This is an ideal setup for any ML application, since Docker containers provide great flexibility in the libraries used and can be scaled on demand, all while ensuring that the application remains performant.

The Docker image used in this tutorial contains a simple Flask web application with Nginx web server and uses Microsoft’s Cognitive Toolkit (CNTK) as the deep learning framework, with a pretrained ResNet 152 model. Our web application is a simple image classification service, where the user submits an image, and the application returns the class the image belongs to. This end-to-end tutorial is split into four sections, namely:

  • Create Docker image of our application (00_BuildImage.ipynb).
  • Test the application locally (01_TestLocally.ipynb).
  • Create an ACS cluster and deploy our web app (02_DeployOnACS.ipynb).
  • Test our web app (03_TestWebApp.ipynb, 04_SpeedTestWebApp.ipynb).

Each section has an accompanying Jupyter notebook with step-by-step instructions on how to create, deploy and test the web application.

Create Docker Image of the Application (00_BuildImage.ipynb)

The Docker image in this tutorial contains three main elements, namely: the web application (web app), the pretrained model, and the driver that executes the model based on the requests made to the web application. The Docker image is based on an Ubuntu 16.04 image to which we added the necessary Python dependencies and installed CNTK (another option would be to test our application on an Ubuntu Data Science Virtual Machine from the Azure portal). An important point to be aware of is that the Flask web app runs on port 5000, so we use Nginx to proxy port 88 to port 5000 and expose port 88 from the container. Once the container is built, it is pushed to a public Docker Hub account so that the ACS cluster can access it.
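
To make the structure of the scoring service concrete, here is a minimal sketch of a Flask app in the same spirit. It is not the notebook's code: the classifier is stubbed out, and the /score route and payload shape are assumptions made for illustration.

```python
# Minimal sketch of the Flask scoring service structure (illustrative, not the
# actual notebook code). The CNTK ResNet 152 driver is replaced by a stub so
# the sketch stays self-contained.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_top_class(image_bytes):
    # Stub for the real driver, which would feed the image through the
    # pretrained CNTK ResNet 152 model and return the most likely class.
    return {"class": "placeholder", "probability": 0.0}

@app.route("/score", methods=["POST"])
def score():
    image_bytes = request.files["image"].read()  # image uploaded by the client
    return jsonify(predict_top_class(image_bytes))

if __name__ == "__main__":
    # Flask listens on port 5000; Nginx proxies port 88 to 5000 inside the container.
    app.run(host="0.0.0.0", port=5000)
```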

Test the Application Locally (01_TestLocally.ipynb)

Having short feedback loops while debugging is very important and ensures quick iterations. Docker images allow the user to do this, as the user can run the application locally and check its functionality before going through the entire process of deploying the app to ACS. This notebook outlines the process of spinning up the Docker container locally and configuring it properly. Once the container is up and running, the user can send requests to be scored using the model and check the model's performance.
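
As a rough illustration of such a local smoke test, the snippet below posts an image to the container, assuming it was started with port 88 published to the host (for example with `docker run -d -p 88:88 <your-image>`); the /score endpoint and payload shape follow the sketch above rather than the notebook's exact API.

```python
# Rough local smoke test (illustrative). Assumes the container publishes
# port 88 on localhost and exposes a /score endpoint as sketched earlier.
import requests

with open("test_image.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:88/score",   # Nginx in the container listens on 88
        files={"image": f},
        timeout=30,
    )

print(response.status_code, response.json())
```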

Create an ACS Cluster and Deploy the Web App (02_DeployOnACS.ipynb)

In this notebook, the Azure CLI is used to create an ACS cluster with two nodes (this can also be done via the Azure portal). Each node is a D2 VM, which is quite small but sufficient for this tutorial. Once ACS is set up, to deploy the app, the user needs to create an SSH tunnel into the head node. This ensures that the user can send the JSON application schema to Marathon.

In the schema, we map port 80 of the host to port 88 on the container (users can choose different ports as well). This tutorial only deploys one instance of the application (the user can scale this up, but that will not be discussed here). Marathon has a web dashboard that can be accessed through the SSH tunnel by simply pointing the web browser to the tunnel created for deploying the application schema.
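
To give a feel for what that application schema and deployment step look like, here is a hedged sketch; the application id, image name, resource values and local tunnel port are illustrative assumptions, not the tutorial's exact values.

```python
# Illustrative sketch of a Marathon application schema and how it could be
# submitted through the SSH tunnel (the id, image name, resources and tunnel
# port are assumptions, not the tutorial's exact configuration).
import requests

app_schema = {
    "id": "cntk-resnet-app",
    "cpus": 1,
    "mem": 2048,
    "instances": 1,                                  # single instance, as in the tutorial
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "yourdockerhub/cntk-resnet:latest",   # public Docker Hub image
            "network": "BRIDGE",
            "portMappings": [
                {"hostPort": 80, "containerPort": 88}      # host 80 -> container 88 (Nginx)
            ],
        },
    },
}

# Marathon's REST API, reached via the SSH tunnel to the ACS head node
# (the local tunnel port is an assumption).
resp = requests.post("http://localhost:8080/v2/apps", json=app_schema)
print(resp.status_code, resp.json())
```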

Test the Web App (03_TestWebApp.ipynb, 04_SpeedTestWebApp.ipynb)

Once the application has been successfully deployed, the user can send scoring requests. The illustration below shows examples of some of the results returned from the application. The ResNet 152 model seems to be fairly accurate, even when parts of the subject (in the image) are occluded.


Further, the average response time for these requests is less than a second, which is very performant. Note that this tutorial was run on a virtual machine in the same region as the ACS cluster. Response times across regions may be slower, but the performance is still acceptable for a single container on a single VM.
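
A crude way to reproduce this kind of measurement is to time a small batch of requests from a VM in the same region; the endpoint address below is a placeholder and the payload follows the earlier sketches.

```python
# Crude latency check (illustrative): time a handful of scoring requests.
# The agent address is a placeholder; the /score endpoint follows the
# earlier sketches rather than the notebook's exact API.
import time
import requests

url = "http://<acs-agent-dns-name>:80/score"   # placeholder ACS agent address (host port 80)

with open("test_image.jpg", "rb") as f:
    payload = f.read()

latencies = []
for _ in range(10):
    start = time.time()
    requests.post(url, files={"image": ("test_image.jpg", payload)}, timeout=30)
    latencies.append(time.time() - start)

print("average response time: %.3f s" % (sum(latencies) / len(latencies)))
```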

After running the tutorial, to delete the ACS cluster and free up the associated Azure resources, run the cells at the end of the 02_DeployOnACS.ipynb notebook.

We hope you found this interesting – do share your thoughts or comments with us below.

Mathew, Ilia & Jaya

from Cortana Intelligence and Machine Learning Blog https://blogs.technet.microsoft.com/machinelearning/2017/05/25/deployment-of-pre-trained-models-on-azure-container-services/

A Revamped Grand Prix at Inspire Europe 2017 – What You Need to Know

Learn how this year’s Grand Prix at Inspire Europe 2017 will be more collaborative, along with some important dates you should be aware of should you decide to flex your Alteryx muscle and throw your hat in the ring.

from Analytics Blog articles https://community.alteryx.com/t5/Analytics-Blog/A-Revamped-Grand-Prix-at-Inspire-Europe-2017-What-You-Need-to/ba-p/60175

Leave Data Where It Is

The Big Data ‘hype’ may have died down at this point, but for many of our customers big data is still a really big deal. Today, companies have a wide range of tools at their disposal for managing and processing big data, but one aspect of working with big data remains a concern: how to make big data accessible, relevant, and interactive for every business user. Most big data systems are great at processing big data in batch jobs or at supporting the quantitative elite, but they are simply too slow to query in real time and work with interactively. In some cases this pain can be reduced, but only at great financial cost, making it difficult to deliver the full potential of your big data investments across the entire business.


Qlik On-Demand App Generation to the rescue!

 

Over the past few years, Qlik has worked closely with some of our largest customers to develop techniques that provide an interactive user experience on top of big data, so that every user can benefit from these investments. And, best of all, this technique works just as well in Qlik Sense as it does in QlikView.

 

As a simple example, imagine a telco that has data from every touch point between every cell phone and every cell tower. (That’s big data!) A customer calls the telco’s call center asking for help with a connectivity issue they experienced on their phone last Tuesday. The phone rep doesn’t really need ALL of the data in the big data store to do that analysis, but they do need to be able to work interactively, in real time, with the data that is relevant to the caller so that they can help them.

 

On-Demand App Generation (ODAG) lets a user first select a subset of data they are interested in from a big data lake, and then generates a detailed app with the relevant data for the user to explore interactively.

 

In our example, the phone rep might select the caller’s phone number and all of the cell towers within a wide radius of the area where the caller was traveling last Tuesday. Qlik On-Demand App Generation will then spawn a customized instance of the analysis app with just the data that is needed to help this customer. Since the customized version of the analysis app is now in memory, Qlik is able to deliver a tailor-made, highly interactive experience. Why is this important? Because it allows the phone rep to work with the customer in real time, solving their problem and improving customer service.

 

We will share more specifics about how to work with On-Demand App Generation in both Qlik Sense and QlikView in the future. Stay tuned!

 

Qlik On-Demand App Generation was actually introduced last June, after we worked with a number of large customers to develop the technique. Over the past year we have worked to provide a more integrated solution, which is what you will be seeing this June.


What’s Cooking @ Qlik

This information and Qlik's strategy and possible future developments are subject to change and may be changed by Qlik at any time for any reason without notice. This information is provided without a warranty of any kind. The information contained here may not be copied, distributed, or otherwise shared with any third party.

 

ODAG is an incredible tool to have in your big data toolbox, but there is still room for improvement. In the future, our goal is to take this a step further and deliver the best of both worlds: a direct connection back to a ‘live’ big data store and a highly interactive user experience that delivers the Associative Experience.

 

With On-Demand App Generation, we solve the performance concerns of working with big data, but in order to request a different ‘slice’ of the data, users need to move back to the selection app and start over. And, of course, working on the entire data lake is not possible using this model.

 

At Qonnections recently we were able to get a preview of just how this is expected to work in the future.

 

In addition to continuing to offer the On-Demand App Generation approach to big data, Qlik is working toward a solution currently referred to as Associative Big Data Indexing. Imagine a future with the full associative experience on top of a big data lake, without moving the data. This model involves a parallel array of indexing engines optimized for Qlik-style associative queries and speed.


WARNING: The following picture does not exist today.  It is illustrative of an idea and a potential future state that should not be taken as a commitment by Qlik.  This information is provided without a warranty of any kind.

[Illustration: bigdataindex.jpg, a conceptual view of Associative Big Data Indexing]


Data can remain located in the cloud, on premises, or even a combination of the two. And the Associative Big Data Index can be reused across multiple apps, so everyone across the organization can gain the benefits of and insights from your big data investments. We look forward to sharing more about On-Demand App Generation and Associative Big Data Indexing in the future.

from Jive Syndication Feed https://community.qlik.com/blogs/qlikproductinnovation/2017/05/24/leave-data-where-it-is