May 8, 2017 Health and Biotech analytics news roundup

HHS’s Price Affirms Commitment to Health Data Innovation: Secretary Price emphasized the need to decrease the burden on physicians.

Mayo Clinic uses analytics to optimize laboratory testing: The clinic uses software from Viewics to look for patterns in lab testing and increase efficiency.

Nearly 10,000 Global Problem Solvers Yield Winning Formulas to Improve Detection of Lung Cancer in Third Annual Data Science Bowl: The winners of the competition, which challenged contestants to accurately diagnose lung scans, were announced.

Gene sequencing at Yale finding personalized root of disease; new center opens in West Haven: The Center for Genomic Analysis at Yale opened and is intended to help diagnose patients.


10 Things to Know About the SUPR-Q

The SUPR-Q (Standardized User Experience Percentile Rank Questionnaire) is a standardized questionnaire that measures the quality of the website user experience.

It’s an 8-item instrument that’s gone through multiple rounds of psychometric validation and is used by hundreds of organizations around the world. Here’s a list of 10 essential things to know about the SUPR-Q.

1. It’s derived from research and refined across studies.

Instead of starting from scratch and making up items, a more effective way to build a questionnaire is to start with similar items described in overlapping ways. For the SUPR-Q, this involved combing the UX and market research literature to find items from other published reports that addressed similar or complementary aspects of website UX quality. The SUPR-Q was derived from analyzing the format and items in 17 existing questionnaires and identifying 33 candidate items to test. These items were then refined or removed over multiple studies based on how well they performed across several statistical tests.

Data from over 4,000 responses across three studies and more than 100 website experiences provided the large dataset needed to refine the SUPR-Q into the compact, reliable, and valid questionnaire it is now. We continue to examine new items and also validate translated versions.

2. It’s reliable.

Reliability is how consistently people respond to items. The SUPR-Q's overall measure of UX quality shows high internal-consistency reliability (Cronbach's α = .86). Its subfactors have lower but still acceptable reliability (α = .64 to α = .88). Lower reliability is a natural consequence of having fewer similar items (2 per subfactor vs. 8 overall), so the lower scores are expected; it's a small price to pay. The lowest-scoring factor is loyalty, and its lower alpha is also driven by the different number of scale points in the 11-point likelihood-to-recommend item.
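For readers who want the mechanics: coefficient alpha is computed from the item variances and the variance of the summed scale. Here is a minimal sketch in Java of the standard formula (the response matrix is made up for illustration):

public class CronbachAlpha {

    // alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    // rows = respondents, columns = items
    public static double alpha(double[][] x) {
        int n = x.length, k = x[0].length;

        // Sum of the individual item variances
        double sumItemVar = 0;
        for (int j = 0; j < k; j++) {
            double[] item = new double[n];
            for (int i = 0; i < n; i++) item[i] = x[i][j];
            sumItemVar += variance(item);
        }

        // Variance of each respondent's total score
        double[] totals = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < k; j++) totals[i] += x[i][j];

        return (k / (k - 1.0)) * (1 - sumItemVar / variance(totals));
    }

    private static double variance(double[] v) {
        double mean = 0;
        for (double d : v) mean += d;
        mean /= v.length;
        double ss = 0;
        for (double d : v) ss += (d - mean) * (d - mean);
        return ss / (v.length - 1); // sample variance
    }

    public static void main(String[] args) {
        double[][] responses = { // made-up 1-5 ratings: 4 respondents x 3 items
            {4, 5, 4}, {3, 3, 4}, {5, 5, 5}, {2, 3, 2}
        };
        System.out.printf("alpha = %.2f%n", alpha(responses));
    }
}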

3. It’s valid.

Validity is the capability of a questionnaire to measure what it's intended to measure. There are a number of ways to assess validity, and the SUPR-Q excels in multiple tests. It has high content validity (items cover the construct of user experience based on expert judgment), high convergent validity (the SUPR-Q correlates with the SUS and other questionnaires that measure similar constructs), and discriminant validity (it differentiates excellent and poor websites as well as or better than other questionnaires). By the way, I used the website webpagesthatsuck.com to identify poor-performing websites for the analysis, a list you definitely don't want your site to be on.

4. It measures four sub-constructs of UX.

In addition to a global measure of UX, the SUPR-Q provides measures of usability, appearance, trust/credibility, and loyalty. Part of the process of questionnaire construction is to examine the number of dimensions a questionnaire has. Across the three studies, factor analysis revealed these four factors.

5. 50 is average.

To make the score as intuitive as possible, SUPR-Q scores are percentile ranks. Percentile ranks make raw data easier to interpret. It's what pediatricians use to describe the weight and height of infants and toddlers, because it's hard to know whether 25 inches is tall or short (especially for sleep-deprived parents with crying kids). With a percentile rank, a 50 means the 50th percentile, which is by definition the average. A SUPR-Q score of 35 is at the 35th percentile: below average.
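Mechanically, a percentile rank is just the share of scores in a reference database that fall below a given raw score. A minimal sketch (the reference values below are made up, not actual SUPR-Q data):

public class PercentileRank {

    // Returns the percentage of reference scores strictly below the given score.
    public static double percentileRank(double score, double[] referenceScores) {
        int below = 0;
        for (double s : referenceScores) {
            if (s < score) below++;
        }
        return 100.0 * below / referenceScores.length;
    }

    public static void main(String[] args) {
        double[] reference = {3.1, 3.8, 4.0, 4.2, 4.5, 4.9}; // made-up raw scores
        System.out.println(percentileRank(4.1, reference));   // 50.0 -> average
    }
}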

6. It’s backed by a normalized database.

In addition to offering a reliable and valid measure of the website user experience, the other advantage of a standardized questionnaire like the SUPR-Q is a normalized (also called norm-referenced) database to compare scores against. The SUPR-Q database contains a rolling list of around 150 websites that we partially update each quarter. This makes even your first measure with the SUPR-Q more meaningful, as you can tell whether scores are good (above the 75th percentile), bad (below the 25th), or average (around the 50th). Because we collect and maintain the data (we don't use client data), you can also compare your score to some of the best-known websites (for example, Amazon, YouTube, Netflix, and Target.com). Maintaining regular updates means the SUPR-Q database isn't free, as the SUS is, but we believe the timely and relevant benchmarks justify the cost.

7. The database is updated quarterly.

Each quarter we collect data from new websites and use it to update the SUPR-Q database. We'll often provide a separate report (for example, for hotels, social media, and retail) with more detail on the leaders and laggards in an industry. Interestingly, while individual website scores fluctuate (some more than others after design changes), the overall average scores across the subfactors don't change much each quarter (usually by only .1 of a point). This suggests the 150 websites in the database provide a reasonably stable measure of website UX quality to benchmark against.

8. It includes NPS computation.

Whether or not you like the Net Promoter Score, many organizations rely on it (or are told to rely on it). For that reason, the 11-point likelihood-to-recommend item is included as part of the SUPR-Q. This means you get not only a measure of loyalty but also the NPS for 150 websites. The average NPS is around –7% (slightly more detractors than promoters), by the way.
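The NPS arithmetic itself is simple: respondents scoring 9-10 on the 0-10 item count as promoters, 0-6 as detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch (the responses are made up):

public class Nps {

    // NPS = %promoters (9-10) minus %detractors (0-6), on 0-10 responses.
    public static double nps(int[] responses) {
        int promoters = 0, detractors = 0;
        for (int r : responses) {
            if (r >= 9) promoters++;
            else if (r <= 6) detractors++;
        }
        return 100.0 * (promoters - detractors) / responses.length;
    }

    public static void main(String[] args) {
        int[] sample = {10, 9, 8, 7, 6, 3, 9, 5}; // made-up responses
        System.out.printf("NPS = %.1f%%%n", nps(sample)); // 0.0%
    }
}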

9. It predicts the SUS.

A decade ago we started benchmarking websites with the SUS, but we know the website user experience is more than just usability. The 2-item usability factor on the SUPR-Q can predict SUS scores quite accurately because the two are highly correlated (r = .87). We wanted to retain as much continuity as possible with existing SUS data, so when we created the SUPR-Q we ensured its usability factor correlated highly with the SUS.
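To illustrate how such a prediction could work, the sketch below fits a simple least-squares line to paired scores and uses it to predict SUS from a SUPR-Q usability score. The numbers are entirely hypothetical; this is not the published SUPR-Q regression equation.

public class SusPredictor {

    public static void main(String[] args) {
        double[] usability = {3.2, 3.8, 4.1, 4.4, 4.7}; // hypothetical SUPR-Q usability means
        double[] sus       = {55,  68,  74,  80,  88 }; // hypothetical matching SUS scores

        // Ordinary least squares: slope = cov(x, y) / var(x)
        double mx = mean(usability), my = mean(sus), sxy = 0, sxx = 0;
        for (int i = 0; i < usability.length; i++) {
            sxy += (usability[i] - mx) * (sus[i] - my);
            sxx += (usability[i] - mx) * (usability[i] - mx);
        }
        double slope = sxy / sxx, intercept = my - slope * mx;

        System.out.printf("Predicted SUS at usability 4.0: %.1f%n",
                intercept + slope * 4.0);
    }

    private static double mean(double[] v) {
        double s = 0;
        for (double d : v) s += d;
        return s / v.length;
    }
}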

10. It’s not meant to diagnose problems.

The SUPR-Q, like most standardized questionnaires, provides a broad measure of the experience, but it's not specific enough to tell you what to fix on a website. For that, you need to conduct a usability test or expert review; this is what our industry reports provide. As part of its validation process, the SUPR-Q differentiates between websites with poor and superior user experiences. What's more, we've found that SUPR-Q scores correlate well (r = .5) with a detailed guideline review of a website.

 


Best Practices for Using Context Variables with Talend – Part 2

First off, a big thank you to all those who have read the first part of this blog series! If you haven't read it, I invite you to read it now before continuing, as Part 2 will build upon it and dive a bit deeper. Ready to get started? Let's kick things off by discussing the Implicit Context Load.

The Implicit Context Load

The Implicit Context Load is one of those pieces of functionality that can very easily be ignored but is incredibly valuable.

Simply put, the implicit context load is just a way of linking your jobs to a hardcoded file path or database connection to retrieve your context variables. That’s great, but you still have to hardcode your file path/connection settings, so how is it of any use here if we want a truly environment agnostic configuration?

Well, what is not shouted about as much as it probably should be is that the Implicit Context Load configuration variables can not only be hardcoded, but they can be populated by Talend Routine methods. This opens up a whole new world of environment agnostic functionality and makes Contexts completely redundant for configuring Context variables per environment.

You can find the Talend documentation for the Implicit Context Load here. You will notice that it doesn't say (at the moment…maybe an amendment is due :)) that each of the Implicit Context Load configuration fields can be populated by Talend routine methods instead of being hardcoded.
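For instance, instead of a hardcoded path, the "From File" field of the Implicit Context Load could call a routine method. Below is a minimal sketch of such a routine; the class name, method name, and the TALEND_ENV environment-variable scheme are all illustrative assumptions on my part, not Talend-prescribed names.

package routines;

public class ContextLoadUtils {

    // Returns the context file path for the current environment.
    // Assumes a TALEND_ENV OS environment variable (e.g. DEV, TEST, PROD).
    public static String getContextFilePath() {
        String env = System.getenv("TALEND_ENV");
        if (env == null || env.trim().isEmpty()) {
            env = "DEV"; // sensible default for local development
        }
        return "/opt/talend/context/context_" + env.toLowerCase() + ".properties";
    }
}

The file field would then contain routines.ContextLoadUtils.getContextFilePath() rather than a literal path, so the same Job picks up the right context file in every environment.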

JASYPT

Before I go any further, it makes sense to jump onto a slight tangent and mention JASYPT. JASYPT is a Java library which allows developers to add basic encryption capabilities to their projects with minimum effort, and without needing deep knowledge of how cryptography works. JASYPT is supplied with Talend, so there is no need to hunt around and download all sorts of Jars to use it. All you need to do is write a little Java to obfuscate your values and prevent others from reading them in clear text.

Now, you won't necessarily want all of your values to be obfuscated; that might actually be a bit of a pain. However, JASYPT makes this easy as well. It comes with built-in functionality that can ingest a file of parameters and decrypt only the values wrapped in ENC(…).

This means that in a file with values such as those below (example SQL Server connection settings)…

TalendContextAdditionalParams=instance=TALEND_DEV

TalendContextDbName=context_db

TalendContextEnvironment=DEV

TalendContextHost=MyDBHost

TalendContextPassword=ENC(4mW0zXPwFQJu/S6zJw7MIJtHPnZCMAZB)

TalendContextPort=1433

TalendContextUser=TalendUser

…only the TalendContextPassword value will be decrypted; the rest will be left as they are.
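As a quick taste of how this selective decryption looks in plain Java, here is a minimal sketch using JASYPT's EncryptableProperties class (the file name and password are illustrative):

import java.io.FileInputStream;

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;
import org.jasypt.properties.EncryptableProperties;

public class LoadEncryptedContext {

    public static void main(String[] args) throws Exception {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        encryptor.setAlgorithm("PBEWithMD5AndDES");
        encryptor.setPassword("BOB"); // in practice, supply this from somewhere safer

        // Behaves like java.util.Properties, but transparently decrypts ENC(...) values
        EncryptableProperties props = new EncryptableProperties(encryptor);
        props.load(new FileInputStream("context_dev.properties"));

        System.out.println(props.getProperty("TalendContextPassword")); // decrypted
        System.out.println(props.getProperty("TalendContextHost"));     // left as-is
    }
}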

This piece of functionality is useful in a lot of ways and often gets overlooked by people looking to hide values that need to remain easily available to Talend Jobs. I will demonstrate precisely how to make use of it later, but first I'll show you how simple JASYPT is to use when you just want to encrypt and decrypt a String.

Simple Encrypt/Decrypt Talend Job

In the example I will give you in Part 3 of this blog series (I have to have something to keep you coming back), the code will be a little more involved than what's below. Below is an example Job showing how simple it is to use the JASYPT functionality. This Job could be used to encrypt whatever values you wish to encrypt manually. Its layout is simple.

 

Two components: a tLibraryLoad to load the JASYPT Jar and a tJava to carry out the encryption/decryption.

The tLibraryLoad simply loads the JASYPT Jar. Your included version of JASYPT may differ from the one I have used; use whichever comes with your Talend version.

The tJava needs to import the relevant class from the JASYPT Jar:

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

Now, to make use of the StandardPBEStringEncryptor, I used the following code (shown here so you can copy it):

//Configure encryptor class
StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
encryptor.setAlgorithm("PBEWithMD5AndDES");
encryptor.setPassword("BOB");

//Set the String to encrypt and print it
String stringToEncrypt = "Hello World";
System.out.println(stringToEncrypt);

//Encrypt the String and store it as the cipher String. Then print it
String cipher = encryptor.encrypt(stringToEncrypt);
System.out.println(cipher);

//Decrypt the String just encrypted and print it out
System.out.println(encryptor.decrypt(cipher));

In the above, everything is hardcoded. I am encrypting the String "Hello World" using the password "BOB" and the algorithm "PBEWithMD5AndDES". When I run the Job, I get the following output:

Starting job TestEcryption at 07:47 19/03/2018.

[statistics] connecting to socket on port 3711
[statistics] connected
Hello World
73bH30rffMwflGM800S2UO/fieHNMVdB
Hello World
[statistics] disconnected

Job TestEcryption ended at 07:47 19/03/2018. [exit code=0]

These snippets of information are useful, but how do you knit them together to provide an environment agnostic Context framework to base your jobs on? I’ll dive into that in Part 3 of my best practices blog. Until next week!


The Updated Piwik PRO Marketing Suite 6.2.0 is here!

At the beginning of the month, we're glad to announce that Piwik PRO Marketing Suite has been upgraded to version 6.2.0. The official release to our customers was on July 31 this year. The software update brings various new capabilities along with performance improvements, the result of numerous meetings, discussions, and significant input from both our customers and staff.

In this post, we'll give you a rundown of all the major changes and fixes so you don't miss a beat. So, here we go.

What can you expect from the refreshed Tag Manager?

With the latest update to Tag Manager, our product has expanded its library of DoubleClick tag templates with Floodlight Counter and Floodlight Sales, letting you track conversion activities more efficiently. The first enables you to count how many times users visited your website after they either clicked on or saw one of your ads. Thanks to the second, you can record how many items users have bought and the value of the whole purchase.

What's more, our team fixed issues concerning variables and even expanded their functionality: variables now cover various types, such as string, integer, boolean, and object, depending on the usage context.

Next, we made some changes regarding cookies. Namely, the cookie's default expiry period has been reduced to one year, and this can't be changed by the user.

What can you expect from the refreshed Consent Manager?

The recent update to Piwik PRO has introduced several new functionalities to Consent Manager. First of all, you can now manage consents with a JavaScript API that enables you to:

  • get consent types
  • get new consent types
  • get consent settings
  • send data subject requests

All in an easier and more convenient way.

Then, you can get a better view of visitors' consent decisions with the newly included Consent Manager reports. This way you can see, for instance, whether a user viewed the consent form, provided consent, or just left the page without submitting any consent decision.

A view of one of the consent manager reports.

Furthermore, we added new functionality so that users with Edit & Publish authorization can easily manage all consents.

Consent Manager's visual editor has been upgraded with an HTML elements tree for a better user experience. It gives you an easy and convenient way to track and visualize changes to your consent form. Moreover, with the product update you can easily see the history of all modifications to the copy in the consent form.

Lastly, you'll be able to ask your visitors for consent again six months after their first consent decision was recorded. This can be used to encourage users to provide consent if they didn't do so the first time, or if they changed their consent decision at some point.

What can you expect from the refreshed Audience Manager?

Another product in our stack that got a makeover is Audience Manager (our Customer Data Platform). One of the most significant changes was the addition of two API endpoints: you can now pull lists of created audiences and easily export all profiles from Audience Manager into a CSV file via the API. This is particularly useful for automating data transfers from Audience Manager to your other marketing tools, such as your marketing automation platform.

What can you expect from the refreshed Analytics?

Last but not least, our flagship product, Analytics, has received a significant enhancement: row evolution reports for funnel steps. It's a big asset, as you can now take a closer look at each funnel step individually on each row of your report. This comes in handy for seeing how metrics change over time, for instance due to modifications to the site or an increase in traffic. What's more, you can apply annotations to charts on a particular date to mark the exact moment a change occurred.

A view of row evolution report for each step of the funnel.

To round things out

As you can see, our team has introduced a host of improvements with the new update. Some are major changes, while others are small upgrades and fixes. We are constantly working on our products so they'll run smoothly and help you address all your analytics issues on the spot. Naturally, we'll be releasing more advancements, tweaks, and new features soon, so stay tuned! If you have any questions or suggestions, we're here for you, so…

Contact us


Compelling Use Cases for Creating an Intelligent Internet of Things

The industrial sector of the Internet of Things, the Industrial Internet, is far more advanced in terms of adoption rates, technological applications, and business value generation than the consumer side of the IoT is. Perhaps the most pronounced advantage between these two sectors is in the utilization of machine learning, which is routinely deployed in the Industrial Internet for equipment asset monitoring and predictive maintenance.

Although the use cases for this aspect of Artificial Intelligence differ on the consumer side, they still provide the same core functionality that has improved the industrial sector for years: identifying patterns and presaging action to benefit the enterprise.

On the industrial side, those benefits involve sustaining productivity, decreasing costs, and increasing efficiency. On the consumer side, the prudent deployment of machine learning on the IoT’s immense, continuously generated datasets results in competitive advantage and increased revenues.

“The thing about the IoT in general is that the amount of data from these things is enormous,” Looker Chief Data Evangelist Daniel Mintz noted. “So when you’re selling thousands, tens of thousands or hundreds of thousands of devices, that’s a lot of data that gives you a lot of guidance for these devices on what’s working, what’s not working, and how to [best] tune the system.”

Aggregation Analytics
There are multiple ways in which the IoT is primed for machine learning deployments to produce insights that would otherwise go unnoticed due to the scale of data involved. The large datasets are ideal for machine learning or deep learning's need for copious, labeled training data. Moreover, IoT data stem from the field, offering an unparalleled view into how surroundings are impacting functionality, which is critical on the consumer side. "If you're trying to understand what the relationship is between failure rates or misbehavior and the environment, you're going to absolutely be using machine learning to uncover those linkages," Mintz said. The basic use case for machine learning here is user behavior or aggregate analytics, in which massive quantities of data from IoT devices are aggregated and analyzed for patterns about performance. "At this scale, it's not very easy to do any other way," Mintz observed.

However, there’s a capital difference in the environments pertaining to the industrial versus the consumer side of the IoT, which makes aggregate analytics particularly useful for the latter. “Industrial applications are more controlled and more uniform,” Mintz said. “With the consumer side, you’re selling devices that are going out into the real world which is a much less controlled environment, and trying to figure out what’s happening out there. Your devices are undoubtedly encountering situations you couldn’t have predicted. So you can find out when they’re performing well, when they’re not, and make changes to your system based on that information.”

User Behavior
That information also provides the means for product adjustments and opportunities to increase profits, simply by understanding what users are doing with current product iterations. It’s important to realize that the notion of user behavior analytics is not new to the IoT. “The thing that’s new is that the items that are producing that feedback are physical items in the real world [with the IoT] rather than websites or mobile apps,” Mintz commented. “System monitoring and collating huge amounts of data to understand how people are using your product is a relatively old idea that people who run websites have been doing for decades.” When machine learning is injected into this process, organizations can determine a number of insights related to feature development, marketing, and other means of capitalizing on user behavior.

“You might be using machine learning to understand who is likely to buy another device because they’re really using the device a lot,” Mintz said. “Or, you might use machine learning to improve the devices.” For example, organizations could see which features are used most and find ways to make them better, or see which features are rarely used and improve them so they provide a greater user experience. The possibilities are illimitable and based on the particular device, the data generated, and the ingenuity of the research and development team involved.

Integrating Business Semantics
Because IoT data is produced in real time via streaming or sensor data technologies, those attempting to leverage it inherently encounter integration issues when applying it to other data sources for a collective view of its meaning. When properly architected, the use of semantic technologies (standard data models, taxonomies, and vocabularies) can "allow business analysts to take all of their knowledge of what data means to the business, how it's structured and what it means, and get that knowledge out of their heads and into [the semantic] software," Mintz mentioned. When dealing with low-latency data at the IoT's scale, such business understanding is critical for incorporating that data alongside that of other sources, and even expanding the base of users of such data. "The reality is raw data, particularly raw data coming off these IoT devices, is really [daunting]," Mintz said. "It's just a stream of sort of logs that's really not going to help anybody do anything. But if you start to collect that data and turn it into something that makes sense to the business, now you're talking about something very different."

Anomaly Detection
The most immediate way AI is able to enhance the Internet of Things is via the deployment of machine learning to identify specific behaviors, and offer ways to make system, product or even enterprise improvements based on them. Perhaps the most eminent of these is machine learning’s capacity for anomaly detection, which delivers numerous advantages in real-time IoT systems. “That’s a huge use case,” Mintz acknowledged. “It’s not the only one by any means, but I do think it’s a huge use case. That comes straight out of the manufacturing world where you’re talking about predictive maintenance and preventative maintenance. What they’re looking for is anomalous behavior that indicates that something is going wrong.” When one considers the other use cases for intelligent IoT applications associated with performance, environments, product development, and monetization opportunities, they’re an ideal fit for machine learning.
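To make the idea concrete, here is a toy sketch of the kind of rolling statistic such systems compute: flag a sensor reading as anomalous when it sits more than a set number of standard deviations from the rolling mean. The window size and threshold are arbitrary choices for illustration, not a production recipe.

import java.util.ArrayDeque;
import java.util.Deque;

public class RollingZScoreDetector {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double zThreshold;

    public RollingZScoreDetector(int windowSize, double zThreshold) {
        this.windowSize = windowSize;
        this.zThreshold = zThreshold;
    }

    // Returns true when the reading deviates from the rolling mean
    // by more than zThreshold standard deviations.
    public boolean isAnomaly(double reading) {
        boolean anomalous = false;
        if (window.size() == windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double var = window.stream().mapToDouble(v -> (v - mean) * (v - mean)).sum() / windowSize;
            double std = Math.sqrt(var);
            anomalous = std > 0 && Math.abs(reading - mean) / std > zThreshold;
            window.removeFirst(); // slide the window
        }
        window.addLast(reading);
        return anomalous;
    }

    public static void main(String[] args) {
        RollingZScoreDetector detector = new RollingZScoreDetector(5, 3.0);
        double[] readings = {20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 35.7, 20.0}; // 35.7 is the outlier
        for (double r : readings) {
            System.out.println(r + " -> " + (detector.isAnomaly(r) ? "ANOMALY" : "ok"));
        }
    }
}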


Benefits of IoT for Hospitals and Healthcare

Undoubtedly, Internet of Things technology has been significantly transforming the healthcare industry by revamping the way devices, apps, and users connect and interact with each other to deliver healthcare services. IoT is continuously introducing innovative tools and capabilities, such as IoT-enabled medical app development, that build up an integrated healthcare system with the vision of assuring better patient care at reduced costs.

Consequently, it presents numerous opportunities that hospitals and wellness promoters can pursue as they optimize resources with automated workflows. For example, many hospitals use IoT to control humidity and temperature and to manage assets within operating areas. Moreover, IoT applications offer enormous perks to healthcare providers and patients, considerably improving healthcare services.

The Impact of IoT on Healthcare Industry

Check out some of the best IoT applications that are impacting healthcare services:

Real-Time Remote Monitoring

IoT enables connecting multiple monitoring devices and thus monitoring patients in real time. Further, these connected devices can also send signals from home, decreasing the time required for patient care in hospitals.

Blood Pressure Monitoring

A sensor-based intelligent system, such as a Bluetooth-enabled coagulation system, can be used to monitor the blood pressure levels of patients with hypertension. Such monitoring devices also help diminish the possibility of cardiac arrest in critical cases.

Smart Pills

Pharmaceutical companies like Proteus Digital Health, WuXi PharmaTech, and TruTag have been making edible IoT "smart" pills that help monitor health issues, medication control, and adherence. Such smart pills will also help drug makers lower their risks.

Smart Watches

IoT-enabled wearable devices such as the Apple Watch can effectively monitor and evaluate people's moods and report the information to a server. Moreover, some apps are being built to monitor fitness activities and sleep cycles.

Let's Have a Look at the Key Benefits of IoT in the Healthcare Industry

Reduced Cost

By leveraging the connectivity of healthcare solutions, healthcare providers can improve patient monitoring on a real-time basis and thereby noticeably diminish needless doctor visits. In particular, advanced home care services are reducing re-admissions and hospital stays.

Better Results of Treatment

Connected healthcare solutions backed by cloud computing or other virtual infrastructure enable care providers to obtain real-time information that helps them make knowledgeable decisions and provide evidence-based treatment. This ensures timely healthcare provision and improved treatment results.

Enhanced Disease Management

This is one of the biggest benefits of IoT in the healthcare sector. IoT empowers healthcare providers to monitor patients and access real-time data continuously. This helps treat diseases before they develop into a serious condition.

Reduced Faults

Precise data collection and automated workflows, along with data-driven decisions, greatly help to reduce waste and system costs and, most notably, to diminish errors.

Enhanced Patient Experience

The Internet of Things mainly focuses on the patient's needs. This results in more accurate diagnoses, proactive treatment, timely intervention by doctors, and improved treatment results, giving rise to greater patient trust and a better experience.

Improved Drugs Management

The production and management of drugs is a major expenditure in the healthcare sector. Here as well, IoT plays a huge role: with IoT devices and processes, it is possible to manage these costs better.

Conclusion

IoT-enabled solutions, such as IoT-enabled medical app development and connected healthcare solutions, are proving to be a game changer in the healthcare industry. With its enormous range of applications, IoT has been helping healthcare providers, including doctors, hospitals, and clinics, nurture patients with accurate treatment services and strategies.

Integrating IoT solutions into healthcare services is going to be essential to keep pace with the increasing needs of the digital world. If you are looking to digitize your healthcare services, then IoT should be your first choice. Contact us to learn more about different IoT solutions and applications.

 


Mar 07, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Cover image: Pacman

[ AnalyticsWeek BYTES]

>> How Airbnb Uses Big Data And Machine Learning To Guide Hosts To The Perfect Price by analyticsweekpick

>> Underpinning Enterprise Data Governance with Machine Intelligence by jelaniharper

>> 2016 Trends for the Internet of Things: Expanding Expedient Analytics Beyond the Industrial Internet by jelaniharper


[ NEWS BYTES]

>> Global Risk Analytics Market – Current & Future trends, Growth Opportunities, Industry analysis & forecast by 2025 (TechnoBust) under Risk Analytics

>> Prescriptive Analytics Market – The Growing Prominence of Big Data (CMFE News) under Prescriptive Analytics


[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz

Use data to build a better startup faster, in partnership with Geckoboard…

[ FEATURED READ]

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

In the world's top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm…

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not so much a tech challenge as an adoption challenge. Adoption has its roots in the cultural DNA of any organization. Great data-driven organizations build the data-driven culture into their corporate DNA. A culture of connection, interaction, sharing, and collaboration is what it takes to be data driven. It's about being empowered more than it's about being educated.

[ DATA SCIENCE Q&A]

Q: Examples of NoSQL architecture?
A: * Key-value: in a key-value NoSQL database, all of the data consists of an indexed key and a value. Examples: Cassandra, DynamoDB.
* Column-based: designed for storing data tables as sections of columns of data rather than as rows. Examples: HBase, SAP HANA.
* Document database: maps a key to some document that contains structured information; the key is used to retrieve the document. Examples: MongoDB, CouchDB.
* Graph database: designed for data whose relations are well represented as a graph, with interconnected elements and an undetermined number of relations between them. Example: Neo4j. (A toy sketch of the key-value and document models follows below.)
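Here is that toy illustration of the key-value and document models, using plain Java maps rather than a real database client, just to show the shape of the data each model stores:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NoSqlModels {

    public static void main(String[] args) {
        // Key-value model: an opaque value behind an indexed key (Cassandra/DynamoDB style)
        Map<String, String> kvStore = new HashMap<>();
        kvStore.put("user:42:lastLogin", "2019-03-07T10:15:00Z");
        System.out.println(kvStore.get("user:42:lastLogin"));

        // Document model: the key retrieves a structured document (MongoDB/CouchDB style)
        Map<String, Map<String, Object>> docStore = new HashMap<>();
        docStore.put("user:42", Map.of(
                "name", "Ada",
                "roles", List.of("admin", "analyst")));
        System.out.println(docStore.get("user:42").get("name"));
    }
}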


[ VIDEO OF THE WEEK]

#FutureOfData with Rob (@telerob) / @ConnellyAgency on running innovation in agency

Subscribe on YouTube

[ QUOTE OF THE WEEK]

In God we trust. All others must bring data. – W. Edwards Deming

[ PODCAST OF THE WEEK]

Unconference Panel Discussion: #Workforce #Analytics Leadership Panel

Subscribe on iTunes or Google Play

[ FACT OF THE WEEK]

100 terabytes of data uploaded daily to Facebook.
