Since 2010, London has been home to our flagship office in EMEA, and we’ve made tremendous strides in growing that office. Our offices in this region are relentlessly focused on delivering exceptional outcomes for our customers like Spotify, Dixons Carphone, and Unilever, along with many others throughout EMEA. With a product that’s built on cutting […]
One interesting Qlik Sense extension that we have successfully used is the BiPartite one. We have used it in a couple of mashups, like the UK Migration one: http://webapps.qlik.com/telegraph/uk-migration/index.html. I like this one since you have a
from Jive Syndication Feed https://community.qlik.com/blogs/qlikviewdesignblog/2017/03/31/bipartite-extension
The practice of data science requires skills that fall into three general areas: business acumen, computer technology/programming, and statistics/math. Depending on whom you ask, the specific set of top skills varies. Dave Holtz describes the data science skills you need to get a job as a data scientist (8 Skills You Need to Be a Data Scientist). Ferris Jumah, examining LinkedIn profiles with the title “Data Scientist,” identified 10 skills (The Data Science Skills Network). BurtchWorks offers their list of skills that are critical to success in data science (9 Must-Have Skills You Need to Become a Data Scientist). RJMetrics, using LinkedIn data, identified the top 20 data science skills (The State of Data Science). For these lists, the top skills reflect either the frequency with which data professionals list them on their social media profiles or simply what the author considers a good set of skills.
If you’ve ever dreamed of being able to travel to the outer reaches of the galaxy and observe the double sunrise of a binary star over a distant planet, or see the twin ejection streams of a neutron star, or lazily graze the ice rings of a gas giant, now you can … almost. Elite Dangerous is a game for the PC and Xbox with all the things you’d expect from a space game: space stations, combat and trading, but the part I love the most is the exploration. The game simulates nothing less than all 400 billion star systems of the Milky Way, and lets you roam among them. All of the planets, stars, and nebulae we know about are there (including the recently-discovered Trappist-1 system), but that’s a tiny fraction of the galaxy we know exists. The rest of the systems are procedurally generated according to cosmological principles as we currently understand them, and you can visit every one (potentially, at least). Exploration is a slow process — I’m currently towards the end of a month-long trek out beyond the Eagle Nebula — but it’s relaxing in a Zen way, and the views along the way are gorgeous.
The simulation is startlingly realistic: I was astounded when, as that distant nebula grew larger with each jump, it became apparent that the background isn’t a generic celestial sphere but is actually generated for each unique location you visit. If you see a star, you can head in that direction and see what planets are circling it (and even land on them). Each planet circles its star in real time, as does each moon about its planet, and each space station about its moon. It’s actually hard to appreciate the depth of the simulation in human time, though, because you don’t typically hang around for a full day to see the rotation of a planet below. But Nicholas Breakspear used the game to make time-lapse movies of various scenes so you can see this galactic orrery in action:
If you’d like to see more, see Throttle Down 2. But for now that’s all from the blog for this week. Have a good weekend — I’m going to see if I can make it to Colonia before we’re back on Monday.
This week on Channel 9, Guest Hosts Seth Juarez and Kendra Havens discuss the week’s top developer news, including:
- [00:34] A Hitchhikers Guide to the CoreCLR Source Code [Matt Warren]
- [01:17] Windows 10 Creators Update coming April 11, Surface expands to more markets [Yusuf Mehdi]
- [01:38] Ignite registration is now open
- [02:14] #VS2017 – Thanks to Visual Studio Feedback, 20 days later, we got a new Release with some bug fixed! [Bruno Capuano]
- [02:51] TypeScript’s New Release Cadence [Daniel Rosenwasser]
- [03:55] Learning more about the Microsoft Data Science Virtual Machine, 4th April 6pm–7pm [Lee Stott]
- [04:46] Visual Studio Uninstaller
- [05:51] Around The World With MVPs: Dozens Of Developers In Istanbul Go Deep Into Microsoft Technologies
Picks of the Week!
Please leave a comment or email us at firstname.lastname@example.org.
In this video, Patrick and I talk about what is on our minds. We take a look at some custom visuals within Power BI and also talk about the On-Premises Data Gateway.
What is Adam and Patrick Unplugged? – [00:24]
Custom Visuals on Office Store – [06:22]
Play Axis Custom Visual – [10:43]
Gateway updates – [17:51]
At the recent Strata conference in San Jose, several members of the Microsoft Data Science team presented the tutorial Using R for Scalable Data Analytics: Single Machines to Spark Clusters. The materials are all available online, including the presentation slides and hands-on R scripts. You can follow along with the materials at home, using the Data Science Virtual Machine for Linux, which provides all the necessary components like Spark and Microsoft R Server. (If you don’t already have an Azure account, you can get $200 credit with the Azure free trial.)
The tutorial covers many different techniques for training predictive models at scale, and for deploying the trained models as predictive engines within production environments. Among the technologies you’ll use are Microsoft R Server running on Spark, the SparkR package, the sparklyr package, and H2O (via the rsparkling package). It also touches on some non-Spark methods, like the bigmemory and ff packages for R (and various other packages that make use of them), and using the foreach package for coarse-grained parallel computations. You’ll also learn how to create prediction engines from these trained models using the mrsdeploy package.
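As a small illustration of the coarse-grained foreach pattern mentioned above, here is a minimal local sketch in R. It is not taken from the tutorial scripts: the dataset (mtcars), model formula, number of resamples, and worker count are all illustrative assumptions.

```r
# Minimal sketch: fit one model per bootstrap resample, in parallel.
# The data, formula, and worker count here are illustrative only.
library(foreach)
library(doParallel)

cl <- makeCluster(2)       # start a local cluster with 2 workers
registerDoParallel(cl)     # register it as the foreach backend

# Train 10 linear models on bootstrap resamples of mtcars
models <- foreach(i = 1:10) %dopar% {
  idx <- sample(nrow(mtcars), replace = TRUE)
  lm(mpg ~ wt + hp, data = mtcars[idx, ])
}

stopCluster(cl)
length(models)   # a list of 10 fitted models
```

Each iteration of the loop is independent, which is what makes this “coarse-grained”: the workers never need to communicate, so the pattern scales naturally from a laptop to a cluster backend.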
The tutorial also includes scripts for comparing the performance of these various techniques, both for training the predictive model and for generating predictions from the trained model.
(The above tests used 4 worker nodes and 1 edge node, all with 16 cores and 112 GB of RAM.)
You can find the tutorial details, including slides and scripts, at the link below.
Strata + Hadoop World 2017, San Jose: Using R for scalable data analytics: From single machines to Hadoop Spark clusters
from Revolutions http://blog.revolutionanalytics.com/2017/03/tutorial-scaling-r.html