Information Builders Offers iWay Big Data Integrator on Microsoft Azure Marketplace Cloud

Information Builders, a leader in business intelligence (BI) and analytics, data integrity, and integration solutions, announced that it is offering its iWay Big Data Integrator (iBDI) product via the cloud on the Microsoft Azure Marketplace.

from insideBIGDATA http://insidebigdata.com/2017/04/30/information-builders-offers-iway-big-data-integrator-microsoft-azure-marketplace-cloud/

Thales: 63% of Enterprises Using Cloud, Big Data, IoT and Container Environments without Securing Sensitive Data

Thales, a leader in critical information systems, cybersecurity and data security, announced the results of its 2017 Thales Data Threat Report, Advanced Technology Edition, issued in conjunction with analyst firm 451 Research. According to the report, 93 percent of respondents will use sensitive data in advanced technology environments (defined as cloud, SaaS, big data, IoT and containers) this year.

from insideBIGDATA http://insidebigdata.com/2017/04/30/thales-63-enterprises-using-cloud-big-data-iot-container-environments-without-securing-sensitive-data/

Coding4Fun April 2017 Round-up

As I sit here Saturday morning, April 29th, writing next week’s posts, scheduling the first for Monday, May 1st, it dawns on me… I didn’t do an April Round-up! Doh!

Well that just won’t do; today’s out-of-band post will take care of that! 🙂

Coding4Fun Blog

Cloning Your VS 2017 Packages
Small Basic is Now Available in the Windows Store
C64ForTheWin – C64 Development on your Windows Machine
Squish That Whitespace
One Browser Extension Tutorial to Rule…
Cortana goes IoT
Menees VS Tools Updated for Visual Studio 2017
ReactXP – A library for building cross-platform apps
"Happy Path” to IoT
Functioning Private Visual Studio Gallery via Azure Functions
dotnet CLI Tool Build Redux
Sock IoT with this Azure Connected System-on-a-chip Project

Coding4Fun Kinect Gallery

HoloOCR’ing
Kinecting to Art
HoloLens Terminator Vision
Kinect to HoloLens with Hololens-Kinect

Past Round-Ups

Coding4Fun January 2017 Round-up
Coding4Fun February 2017 Round-up
Coding4Fun First Quarter 2017 Round-Up

Coding4Fun 2014 Round-Up
Coding4Fun 2015 Round-Up
Coding4Fun 2016 Round-Up

from Channel 9 https://channel9.msdn.com/coding4fun/blog/Coding4Fun-April-2017-Round-up

Dataguise Introduces Sensitive Data Monitoring and Masking for Apache Hive

Dataguise, a leader in sensitive data governance, announced that DgSecure now provides industry-first sensitive data monitoring and masking in Apache Hive.

from insideBIGDATA http://insidebigdata.com/2017/04/29/dataguise-introduces-sensitive-data-monitoring-masking-apache-hive/

AI to Have Dramatic Impact on Business by 2020, According to Tata Consultancy Services Global Trend Study

Tata Consultancy Services (BSE: 532540, NSE: TCS), a leading global IT services, consulting and business solutions organization, unveiled its Global Trend Study titled, “Getting Smarter by the Day: How AI is Elevating the Performance of Global Companies.” Focused on the current and future impact of Artificial Intelligence (AI), the study polled 835 executives across 13 global industry sectors in four regions of the world, finding that 84% of companies see the use of AI as “essential” to competitiveness, with a further 50% seeing the technology as “transformative.”

from insideBIGDATA http://insidebigdata.com/2017/04/29/ai-dramatic-impact-business-2020-according-tata-consultancy-services-global-trend-study/

Qlik Sense 3.2: Extension Properties

After reading Michael’s wonderful post on the 3.2 features, https://community.qlik.com/blogs/qlikviewdesignblog/2017/04/04/introducing-qlik-sense-32?et=blogs.comment.created#commen…, I admit I wanted to get into more detail on each of the topics.

from Jive Syndication Feed https://community.qlik.com/blogs/qlikviewdesignblog/2017/04/28/qlik-sense-32-extension-properties

Because it’s Friday: Powerpoint Punchcards

A “Turing Machine” (a conceptual data-processing machine that executes instructions encoded on a linear tape) is capable of performing the complete range of computations of any modern computer, though not necessarily in a useful amount of time. Tom Wildenhain demonstrates the Turing-completeness of PowerPoint, where the “tape” is a series of punch-cards controlled by the animations feature:

Of the many things you can do with PowerPoint, but probably shouldn’t do, this ranks right up there.

That’s all from us for this week. See you back here on Monday, and have a great weekend!

from Revolutions http://blog.revolutionanalytics.com/2017/04/because-its-friday-powerpoint-punchcards.html

Using the Payments Request API

The Payment Request API helps customers and sellers complete the checkout process more seamlessly. Whether you’re developing a website, a UWP app, or a bot, you can use the APIs to provide a faster and more consistent payment process. Watch the video and then learn more:

Register on Microsoft Seller Center

Read a blog post for the Edge Payment Request API or the UWP Payment Request API.

Finally, you can read the documentation at http://aka.ms/PaymentRequestAPI.

from Channel 9 https://channel9.msdn.com/Blogs/One-Dev-Minute/Using-the-Payments-Request-API

Making Decisions with Data – For Machine Learning Success, Follow the Lessons Learned in Embedded Analytics

Machine learning has been touted as a huge boon for businesses. Research by Infosys found that companies anticipate a 39 per cent boost to their revenues on average by 2020 from their investments in Artificial Intelligence and Machine Learning.

The prospects for machine learning involve adding smarter processes into applications that are used every day, in an effort to reduce the amount of human intervention required to help individuals make better and faster decisions. Bringing together data, analytics and automation, machine learning can help improve your chances of success and make everyone more productive. At least, that is the theory.

In the real world, it will take a while for these applications to be developed and deployed. However, both the will and the investment to make this happen are growing. According to a Forrester survey, there will be more than a 300 per cent increase in investment in cognitive computing in 2017, compared with 2016. Companies are investing in areas such as the Internet of Things to create more data that can then be analysed and consumed alongside data from their business applications.

For the business to see benefit from machine learning, IT teams have to focus on what existing employees can use data for, and how automation capabilities can wrap more value around that data over time. Currently, providing more insight around a customer segment is the first step for analytics. However, machine learning can be applied to that customer segment to see what purchases were made, what timescales were involved and what results might be expected. Using this learning activity, analytics can not only provide some insight, but also prescribe next steps for employees to take.

Lessons to learn

In the example of sales, this might involve pitching specific products or making certain offers that have a greater chance of success. In logistics and supply chain, it might involve structuring deals so that the customer gets deliveries quicker and the business can manage inventories and assets more efficiently over time. These incremental improvements can then be balanced against any wider digital transformation programmes in which the business can utilise machine learning over time.

Now, this market is still very new. While machine learning tools and frameworks exist, they have not been implemented widely, and the number of people with the right skills is low. However, there are other approaches that can help. For IT teams that want to take advantage of their data and machine learning together, lessons can be learned from embedded BI and analytics implementations.

Embedded BI refers to how companies can put analytics into their business and then provide either applications or services that are based on the results. These analytics tools can be used to differentiate a company’s products or services against competitors, and they can enable that company to make more money based on the data it holds. With embedded analytics, companies have to implement for customers quickly and with specific goals in mind.

How does embedding analytics work in practice?

Embedded analytics projects are different from more traditional BI or analytics implementations. Rather than looking at internal customers and their requirements, embedded analytics services are aimed at external customers who will normally have less knowledge of how to work with data. What they do want is a service that helps them quickly, provides more insight than they could get on their own, and can be used without training.

For the team implementing a project like this, the fundamental aims should be to provide tools that can explain their approaches as they go and avoid introducing problems. Alongside this, the project has to solve real business pain points so that the service provides value from day one. For those looking at machine learning implementations, these same aims should be in place from the start.

Let’s look at an example – providing data and analytics tools on a data set such as travel and expenses. For HR staff, sorting through all the data to spot patterns is both a chore and a technical challenge. Providing the tools to automate this process is a good value-added service opportunity for the application provider. By building dashboards and analytic tools that can be shared within the application, the vendor can offer something that is more useful to the business team.

However, just making existing data look prettier is not the foundation for a successful long-term service offering. Instead, those analytics tools have to be rich enough that people can ask their own questions and see how the results are put together. This can then aid collaboration for the HR team with other departments that would be interested in the results.

The opportunity for machine learning takes this further. By looking at the data sets over time and what kinds of results are most valued, machine learning systems can be trained to look for patterns that should be useful to the customer and may be more difficult for an individual analyst to identify. These systems can then provide updates automatically when specific patterns are spotted.

This automation stage can extend embedded projects and make staff more efficient. However, one important element is how that data is shared. If the results are simple reports based on PDFs or spreadsheets, then the ability to share how the insight is generated is reduced. Embedding analytics into a product can make it easier to bring others directly into the data and the results, so they can interact with the data by themselves.

In some respects, this follows on from how we all learn mathematics at school. Even if you can get the right result, it’s just as important to show how you achieved it so that others can follow your thought process. For embedded analytics, showing how results were reached can be useful for all those who might need them. When machine learning is involved as well, the ability to see how a result is achieved, with all its history and lineage, is going to be more important than ever.

Designing for the business

Supporting the future growth of machine learning can take some lessons from embedded analytics projects. The most important of these is how to execute projects based on specific business value, rather than for potential insights that might come in the future.

As an example, many companies have made the move to adopt big data technologies such as Hadoop. Storing data at scale is far easier today compared to the past, so saving information “just in case” it might prove valuable in the future is an understandable reaction. However, this approach ignores how difficult it is to get value out of truly huge data sets that are not suitably structured or available back to the business.

Relying on data scientists to find insights buried in this morass of data is therefore not a guarantee that those insights will be found, let alone that they will justify the cost of technology or staff time. The same is also true of machine learning – just implementing new technologies so that the data can be analysed is not enough. The risk with this approach is that expensive staff time is dedicated to trying to create value from nothing, rather than looking at how to optimise value around current approaches.

Instead, it is worth looking at how the business works today and where automation can help improve productivity. This incremental gain can deliver a result more quickly and provide a greater return on investment.

At the same time, automation frees data scientists to work on bigger issues and to think about how data can be used to solve them. This may lead to complete changes in processes over time, but it does not get in the way of getting some quick wins through automation. The most important consideration is that the approach to machine learning is not a question of “either/or”, but of both.

Machine learning has a huge amount of potential. However, it has to be understood in context. It won’t solve all business problems, and it won’t suddenly turn poor ideas into good ones overnight. What it can do is enable faster and more accurate decision-making and help create more opportunities for success. Using the lessons of embedded analytics projects, companies can start planning how best to make use of machine learning in their own ways.

To learn more about the power of BI solutions and embedded analytics, download the Birst eBook, 7 Companies That Transformed Their Business With Analytics.

This article was originally published in IT Pro Portal on April 6, 2017.

from Blog – Birst https://www.birst.com/blog/making-decisions-data-machine-learning-success-follow-lessons-learned-embedded-analytics/

Make pleasingly parallel R code with rxExecBy

Some things are easy to convert from a long-running sequential process to a system where each part runs at the same time, thus reducing the required time overall. We often call these “embarrassingly parallel” problems, but given how easy it is to reduce the time it takes to execute them by converting them into a parallel process, “pleasingly parallel” may well be a more appropriate name.
Using the foreach package (available on CRAN) is one simple way of speeding up pleasingly parallel problems in R. A foreach loop is much like a regular for loop in R, and by default will run each iteration in sequence (again, just like a for loop). But by registering a parallel “backend” for foreach, you can run many (or maybe even all) iterations at the same time, using multiple processors on the same machine, or even multiple machines in the cloud.
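To make this concrete, here is a minimal sketch using foreach with the doParallel backend (both packages are on CRAN); the bootstrap task and the core count are purely illustrative:

```r
library(foreach)
library(doParallel)

# Register a parallel backend that uses 4 local cores
cl <- makeCluster(4)
registerDoParallel(cl)

# Each iteration fits a model on a bootstrap resample of mtcars;
# %dopar% runs the iterations concurrently, rbind stacks the results
boot_coefs <- foreach(i = 1:100, .combine = rbind) %dopar% {
  resample <- mtcars[sample(nrow(mtcars), replace = TRUE), ]
  coef(lm(mpg ~ wt + hp, data = resample))
}

stopCluster(cl)

# Swapping %dopar% for %do% runs the same loop sequentially
```

Registering a different backend (for example, a cluster-based one such as doAzureParallel) is how the same loop scales out to multiple machines in the cloud without changing the loop body.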
For many applications, though, you need to provide a different chunk of data to each iteration to process. (For example, you may need to fit a statistical model within each country — each iteration will then only need the subset for one country.) You could just pass the entire data set into each iteration and subset it there, but that’s inefficient and may even be impractical when dealing with very large datasets sitting in a remote repository. A better idea would be to leave the data where it is, and run R within the data repository, in parallel.
Microsoft R 9.1 introduces a new function, rxExecBy, for exactly this purpose. When your data is sitting in SQL Server or Spark, you can specify a set of keys to partition the data by, and an R function (any R function, built-in or user-defined) to apply to the partitions. The data doesn’t actually move: R runs directly on the data platform. You can also run it on local data in various formats.
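As a rough sketch of the pattern (following the examples in the Microsoft R Blog post linked below, where the user function receives a partition’s key values plus a data source scoped to that partition; the file, column, and function names here are illustrative):

```r
library(RevoScaleR)  # ships with Microsoft R Client / Microsoft R Server 9.1+

# User function: called once per partition, with that partition's
# key values and a data source holding only that partition's rows
fitByGroup <- function(keys, data) {
  df <- rxImport(inData = data)   # materialize just this partition
  lm(ArrDelay ~ DepDelay, data = df)
}

# Partition a local XDF file by carrier and fit one model per group;
# with a SQL Server or Spark compute context, the data stays put and
# the function runs inside the data platform instead
airXdf <- RxXdfData("airline.xdf")
results <- rxExecBy(inData = airXdf,
                    keys = c("UniqueCarrier"),
                    func = fitByGroup)
```

rxExecBy returns one result per partition, so `results` above would hold a collection of per-carrier models.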

The rxExecBy function is included in Microsoft R Client (available free) and Microsoft R Server. For some examples of using rxExecBy, take a look at the Microsoft R Blog post linked below.
Microsoft R Blog: Running Pleasingly Parallel workloads using rxExecBy on Spark, SQL, Local and Localpar compute contexts

from Revolutions http://blog.revolutionanalytics.com/2017/04/rxexecby.html