WILL OUR FUTURE HAVE UBER FOR EVERYTHING?

When someone says Uber, the first thing that comes to mind is the cab service so many of us use so frequently. Millions of riders globally love the service so much that "Uber" has become a verb in its own right: a service that can be booked at any time and delivered to you anywhere.
New companies are trying to replicate this business model and compete in the market. Success depends on capturing more of the market and building a greater market capitalization than their rivals, yet most of them remain limited to particular geographic markets.

To determine which companies and which markets will be truly transformational, we need to look for those that can provide a service where the greatest share of customers is in pain and unhappy with existing providers. Many service sectors suffer major challenges in availability, quality, transparency, and pricing, and those challenges are what spark ideas for innovation.

Reliability
The promise of most On Demand Mobile Services (ODMS) is to turn the smartphone into your life's remote control: when you tap the button, things need to happen as quickly, consistently, and accurately as possible. Uber leverages local network effects between its users and invests heavily in machine learning and matching algorithms to keep wait times for riders to a minimum and schedules for drivers full, so that drivers earn more. Without short wait times, Uber would be nowhere near the magical experience we all love today.

Service
Service businesses are hard to build because they depend on people interacting with customers directly, and managing a group of people is far more tedious than executing software routines. In any geography, recruiting, selecting, training, and managing workers is a core element of any ODMS. Background checks, license verification, detailed applications, and face-to-face interviews are all part of the selection process.

There are simply no shortcuts when it comes to satisfying the market and delivering consistently great service levels. In the end, quality can make or break a business, no matter how polished its mobile app or user interface.

Price
Tech-enabled services tend to be far more efficient than traditional businesses at acquiring customers and aggregating demand, thanks to digital channels, efficient marketing, and highly visible brands. This often allows them to cut out hidden layers of middlemen in the value chain. Passing a good portion of these savings on to consumers is perhaps the smartest way to generate trial, grow quickly, and hook customers on your service.

Payments
Customers expect effortless payment from any service that has been redefined as on-demand and mobile. The nice part is that this also solves many business-model problems: workers no longer handle cash or credit card numbers, and there is less leakage from workers trying to cut you out with a side deal offered directly to your customer. Over time, we will see the clear majority of ODMS handle payments in the background as part of the consumer experience.

So, will there be an Uber for every service industry? There will be some, but not many global, dominant, hugely valuable, iconic brands. Some industries are simply not important, frequent, or unpleasant enough in our daily lives as they exist today, while others face service challenges so fundamentally hard that it will be a long while before an ODMS truly solves them.

So, while there may only be a handful of Uber-sized winners, there will be many smaller ODMS that find some degree of success, and the biggest winners of all will be consumers themselves.

Author: Xaltius

This content is not for distribution. Any use of the content without intimation to its owner will be considered as violation.

MISCONCEPTIONS OF MACHINE LEARNING

The large increase in data volume has led to an upsurge in research; the term 'Big Data' is one of the hottest phrases in use today. The influence of big data pervades science, business, industry, government, and society.

Processing big data involves collection, storage, transportation, and exploitation. The term is commonly defined by four V's: Volume, Velocity, Veracity, and Variety. Volume is the sheer size of the data that algorithms must process. Velocity refers to streaming data that arrives so rapidly it may be too fast for traditional algorithms and systems to handle. Veracity concerns data quality; in practice, quality tends to degrade as data grows bigger and bigger. Variety refers to the different data types and modalities that may describe a given object. Nowadays, traditional process-oriented companies are trying to become knowledge-based companies driven by data rather than by process.

MACHINE LEARNING WITH BIG DATA:
Data analytics draws on various machine learning techniques, and some misconceptions have grown up around them. Below are the misleading arguments most often heard in this big data era.

Models are irrelevant now
The argument over whether the models of the small-data era still apply to big data is ongoing, as many believe sophisticated models are unfit for big data. Careful inspection of empirical results suggests otherwise. We cannot conclude that the worst-performing model on small data is the simplest one, or vice versa; nor is there any proof that the simplest model on small data will achieve the best performance on big data.
With the rapid advance of computational techniques, we can conclude that sophisticated models become more favored in the big data era, since simple models are usually incapable of handling such large amounts of data.
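As a hedged illustration of this argument (not part of the original article), the sketch below compares a simple linear model with a more flexible gradient-boosted model as the training set grows. The synthetic dataset, the two model choices, and the training-set sizes are all assumptions made purely for illustration.

```python
# Illustrative sketch (assumes scikit-learn and a synthetic dataset):
# compare a simple model with a more flexible one as training data grows,
# to show that rankings on small data need not carry over to larger data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

for n in (500, 5_000, 40_000):          # "small" to "larger" training sets
    Xs, ys = X_train[:n], y_train[:n]
    for name, model in (("logistic", LogisticRegression(max_iter=1000)),
                        ("boosted trees", GradientBoostingClassifier())):
        model.fit(Xs, ys)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"n={n:>6}  {name:>13}  AUC={auc:.3f}")
```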

Correlation is enough
Some books claim that finding correlations in big data is enough. But correlation can never replace causality. Discovering a valid correlation can certainly provide helpful information, but ignoring the importance of causality and treating the replacement of causality by correlation as a feature of the big data era can be dangerous.
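To make the point concrete, here is a small illustration (not from the original text): two variables are strongly correlated only because both depend on a hidden confounder, so changing one has no effect on the other. The variable names and the data-generating process are assumptions.

```python
# Illustrative sketch: correlation without causation via a hidden confounder.
import numpy as np

rng = np.random.default_rng(0)
confounder = rng.normal(size=10_000)               # unobserved common cause
ice_cream = confounder + 0.3 * rng.normal(size=10_000)
drownings = confounder + 0.3 * rng.normal(size=10_000)

# The two variables are strongly correlated...
print("corr:", np.corrcoef(ice_cream, drownings)[0, 1])

# ...but "intervening" on ice_cream (setting it independently of the
# confounder) leaves drownings unchanged, so there is no causal link.
ice_cream_intervened = rng.normal(size=10_000)
print("corr after intervention:",
      np.corrcoef(ice_cream_intervened, drownings)[0, 1])
```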

Previous Methodologies are useless
Many claim that earlier research methodologies were designed for small data and therefore cannot be expected to work on big data. But "big" is a relative term: what we call small data today was the big data of an earlier time, and research on handling ever-larger data has been going on for many years. In fact, high-performance computing, parallel and distributed computing, high-efficiency storage, and related techniques will remain very relevant in the future as well.

Opportunities and Challenges
We need to account for increasing data size because performance measurements such as AUC require repeated scans of the entire dataset. This raises the question: can we identify smaller, valuable subsets of the original dataset to work with instead?
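As a minimal sketch of that idea (an illustration, not a method proposed in the article), the AUC computed on a stratified subsample can be compared against the AUC on the full evaluation set. The model, the synthetic data, and the 5% subsample size are assumptions.

```python
# Illustrative sketch: approximating a full-data AUC with a stratified subsample.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200_000, n_features=20, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("full-data AUC:",
      roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1]))

# Evaluate on a 5% stratified subsample instead of scanning everything.
X_sub, _, y_sub, _ = train_test_split(X_eval, y_eval, train_size=0.05,
                                      stratify=y_eval, random_state=1)
print("subsample AUC:",
      roc_auc_score(y_sub, model.predict_proba(X_sub)[:, 1]))
```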

We can also ask whether a principled parameter-tuning guide could replace today's exhaustive search. Statistical hypothesis testing is another area relevant to big data: it lets us verify that what we have done is correct, and deriving interpretable models serves a similar purpose. Rule extraction and visualization are further important approaches to improving the comprehensibility of models. Finally, an open problem of the big data era is whether we can truly avoid violating privacy, which remains a long-standing unresolved issue.
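One common alternative to exhaustive grid search, offered here purely as an illustrative sketch (the article does not prescribe it), is randomized search over the hyperparameter space, which samples a fixed number of settings instead of enumerating them all. The model, the parameter ranges, and the budget of 20 samples are assumptions.

```python
# Illustrative sketch: randomized search instead of exhaustive grid search.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": randint(2, 20),
    "min_samples_leaf": randint(1, 20),
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions,
                            n_iter=20,            # only 20 sampled settings
                            cv=3, n_jobs=-1, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```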

Author: Xaltius

This content is not for distribution. Any use of the content without intimation to its owner will be considered as violation.

OPTIMIZING BIG DATA

In the past, mining big data with algorithms that scale by leveraging parallel and distributed architectures was the focus of numerous conferences and workshops. Volume is now readily available: datasets that were once measured in terabytes are now measured in petabytes, as data is collected from individuals, business transactions, and scientific experiments. Veracity, the aspect of big data that concerns data quality, has become a focus because of complexity, noise, missing values, and imbalanced datasets. Velocity presents the problem of building streaming algorithms that can also take care of data-quality issues; this field is still at an early stage. Variety of data, structured and unstructured, such as social media, images, video, and audio, presents an opportunity for the data mining community, and a fundamental challenge is integrating these modalities into a single feature-vector representation. The last decade has also seen social media grow, contributing further to the big data field.
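As a hedged sketch of what a streaming learner can look like (an illustration, not the article's method), a linear classifier can be updated one mini-batch at a time, with a simple imputation step standing in for handling data-quality issues such as missing values. The simulated data stream and the model choice are assumptions.

```python
# Illustrative sketch: a streaming learner updated one mini-batch at a time,
# with a crude imputation step standing in for data-quality handling.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

for step in range(100):                                  # mini-batches arriving over time
    X_batch = rng.normal(size=(256, 10))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    X_batch[rng.random(X_batch.shape) < 0.05] = np.nan   # simulate missing values
    X_clean = SimpleImputer(strategy="mean").fit_transform(X_batch)
    model.partial_fit(X_clean, y_batch, classes=classes)
```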

From Data to Knowledge to Discovery to Action
Alternative insights or hypotheses lead to scenarios, which can also be weighted. Data-driven companies have seen productivity increases of 5-6%, according to a report by Brynjolfsson and colleagues that studied about 179 companies.
Healthcare is another area in which big data applications have seen tremendous growth. For example, United Healthcare is using big data to mine customer attitudes and identify customer sentiment and satisfaction. Such analytics can lead to quantifiable and actionable insights.

GLOBAL OPTIMIZATION WITH BIG DATA
Another area where big data offers both opportunities and challenges is global optimization, where the objective is to find the best values of the decision variables with respect to one or more objective functions. Metaheuristic algorithms have been successfully applied to complex, large-scale systems, including the re-engineering and reconstruction of organic networks.
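As a minimal sketch of what a metaheuristic does (purely illustrative; the article does not specify a particular algorithm), a simple (1+1)-style evolutionary search mutates a candidate solution and keeps the mutant only when it improves the objective. The toy objective function, step size, and iteration budget are assumptions.

```python
# Illustrative sketch: a tiny (1+1)-style evolutionary search on a toy objective.
import numpy as np

def objective(x):
    # Toy objective to maximize (assumption): a smooth multimodal function.
    return -np.sum(x**2) + np.sum(np.cos(3 * x))

rng = np.random.default_rng(0)
x = rng.normal(size=5)          # current candidate (5 decision variables)
best = objective(x)
step = 0.5

for _ in range(5_000):
    candidate = x + rng.normal(scale=step, size=x.shape)   # mutate
    value = objective(candidate)
    if value > best:            # keep the mutant only if it improves
        x, best = candidate, value

print("best value:", best, "at", np.round(x, 3))
```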

Global Optimization of Complex Systems
The key idea is to group decision variables with high correlation together and to place independent variables in separate groups. The main approaches to multi-objective optimization are:
• Weighted aggregation based methods,
• Pareto dominance based approaches, and
• Performance indicator based algorithms.
However, the problem with the above-mentioned approaches is that they become inefficient when the number of objectives is larger than three; a small sketch of the Pareto dominance test is given below.
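The following sketch illustrates the Pareto dominance relation at the heart of the second family of approaches: one solution dominates another if it is no worse in every objective and strictly better in at least one. The example objective vectors (cost, delay) are hypothetical.

```python
# Illustrative sketch: Pareto dominance test for minimization objectives.
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives <=, at least one <)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical objective vectors (cost, delay) to be minimized.
solutions = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(solutions))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```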

Big Data in Optimization
Analysis and mining are also challenging because the data is huge, may be stored in different forms, and often contains noise. In short, such data is a perfect example of the four V's of big data.

INDUSTRY, GOVERNMENT AND SOCIETY WITH BIG DATA
In this discussion, the emphasis is on future practice rather than on a purely scientific point of view.

Decentralizing Big Data
"Crowdsourced" data, i.e. the data collected by tech giants such as Google, Facebook, and Apple, can be used to deliver new insights. It is a two-way deal in which users, knowingly or unknowingly, deposit their data in exchange for better services. This data is also made available to others, sometimes deliberately and sometimes inappropriately.
Because this data is stored on centralized servers, concerns about its privacy and security are being raised, and many governments are introducing new regulatory regimes for internet companies. There is a growing sense that we need privacy-friendly platforms in which data is stored with the user rather than in a central cloud.
Decentralizing data would, however, give rise to the problem of extreme data distribution. Internet companies will then face three challenges:
• Scale of the data
• Timeliness of model building
• Migration from centralized to decentralized

Scaled Down Targeted Sub-Models
It is always preferable to use all the data and avoid the need for sampling, and most research effort goes into the algorithms themselves. But with data now so massive, the challenges of data collection, storage, management, cleansing, and transformation cannot be ignored. A possible solution is to partition the data into many overlapping subsets representing many subspaces, so that the multitude of behavioral patterns can be uncovered accurately. A community of thousands of micro models can then be used as an ensemble operating over the entire population.
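Here is a minimal sketch of the micro-model idea, close in spirit to bagging (an illustration, not the article's method): many small models are trained on overlapping partitions of the data and combined by averaging their predictions. The partitioning scheme, model choice, and sizes are assumptions.

```python
# Illustrative sketch: many small models trained on overlapping data partitions,
# combined as an ensemble by averaging their predicted probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

micro_models = []
for _ in range(50):                                   # 50 overlapping subsets
    idx = rng.choice(len(X), size=2_000, replace=True)
    micro_models.append(DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx]))

def ensemble_predict(X_new):
    probs = np.mean([m.predict_proba(X_new)[:, 1] for m in micro_models], axis=0)
    return (probs > 0.5).astype(int)

print(ensemble_predict(X[:5]), y[:5])
```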

Right Time, Real Time Online Analytics
Organizations cannot afford to spend months building models. There is a clear need for models that are built in real time, respond in real time, and learn and change their behavior in real time: dynamic, agile models working online.

Extreme Data Distribution: Privacy and Ownership
Aggregating data centrally is potentially dangerous because it poses great privacy and security risks: centralized data is a single point of failure, where one flaw or one breach can have devastating consequences at massive scale. Personal data needs to move back to the individual to whom it belongs. Instead of centralizing all computation, we need to bring the computation to intelligent agents running on personal devices that communicate with service providers. Personal data can be kept encrypted, with only the owner's smart device able to decrypt it. The devices interact with one another over a mesh network, and insights can be found without ever collecting the raw data on a central server.
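As a highly simplified sketch of this idea (a federated-style illustration, not the protocol described in the article), each simulated device fits a local model on its own private data and only the model parameters are averaged by a coordinator; no raw data leaves a device. Encryption and mesh networking are omitted, and all names, sizes, and parameters are assumptions.

```python
# Illustrative sketch: devices share only model parameters, never raw data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def local_update(X_local, y_local):
    """Fit a small model on one device's private data and return its weights."""
    model = SGDClassifier(max_iter=20, tol=None, random_state=0)
    model.fit(X_local, y_local)
    return model.coef_.copy(), model.intercept_.copy()

# Simulate 10 personal devices, each holding its own private data.
device_params = []
for _ in range(10):
    X_local = rng.normal(size=(200, 5))
    y_local = (X_local @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(int)
    device_params.append(local_update(X_local, y_local))

# A coordinator averages the parameters only; raw data never left the devices.
avg_coef = np.mean([c for c, _ in device_params], axis=0)
avg_intercept = np.mean([b for _, b in device_params], axis=0)
print("aggregated coefficients:", np.round(avg_coef, 2), avg_intercept)
```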

Opportunities and Challenges
Once data is decentralized, the remaining problem is to tackle extreme data distribution. With data distributed so widely, we will be challenged to provide the services we have come to expect from massive centralized data storage. The challenges will then be:
• Appropriately partitioning data to identify behavioral groups within which we can learn, and to model and learn at the individual level.
• Refocusing on delivering learning algorithms that self-learn in real time and operate online.

Author: Xaltius

This content is not for distribution. Any use of the content without intimation to its owner will be considered as violation.

‘VIRTUAL PHYSIOTHERAPIST’ HELPS PARALYZED PATIENTS EXERCISE USING COMPUTER GAMES

According to new research, a simple device can improve the ability of patients with arm disability to play physiotherapy-like computer games.

The low-cost invention, called gripAble™, consists of a lightweight electronic handgrip, which interacts wirelessly with a standard PC tablet to enable the user to play arm-training games. To use it, patients squeeze, turn or lift the handgrip, and it vibrates in response to their performance whilst playing. The device uses a novel mechanism, which can detect the tiny flicker movements of severely paralysed patients and channel them into controlling a computer game.
Special training computer games, controlled by the device, have been designed for people with no previous experience of using computers. For example, one game requires the user to squeeze repeatedly to slowly reveal a photograph.
In a new study published in PLOS ONE, researchers from Imperial College London have shown that using the device increased the proportion of paralysed stroke patients able to direct movements on a tablet screen by 50 per cent compared to standard methods. In addition, the device enabled more than half of the severely disabled patients in the study to engage with arm-training software, whereas none of the patients were able to use conventional control methods such as swiping and tapping on tablets and smartphones.

Over five million people in the UK live with arm weakness — approximately one million of them following a stroke, plus others who have neurological and musculoskeletal conditions. Arm weakness contributes to physical disability that requires expensive long-term care. For example, treatment for stroke costs the NHS £9 billion a year, which is five per cent of the total NHS budget. The only intervention shown to improve arm function is repetitive, task-specific exercise but this is limited by the cost and availability of physiotherapists.

The gripAble™ device is designed for patients to use unsupervised in hospital and at home. The research tested the gripAble™ device over six months with patients at Imperial College Healthcare NHS Trust who had suffered strokes resulting in arm paralysis. The researchers assessed their ability to use gripAble™ to control mobile gaming devices such as tablets that could be used for rehabilitation, and compared this to their use of conventional methods such as swiping and tapping.

They found that 93 per cent of patients were able to make meaningful movements to direct the cursor as a result of using gripAble™. In contrast, 67 per cent of patients were able to use mobile gaming devices by swiping on a tablet. For other types of control over the tablet, such as tapping or using joysticks, the number of patients able to make meaningful movements was lower.

The success of the device was most apparent for patients with severe arm weakness: no patients in this group were able to use conventional controls to play training games, whereas 58% could use gripAble™.

In a smaller sub-group, the trial also demonstrated that severely disabled patients could play computer games that involve tracking a target with accuracy almost as good as that of healthy people.

The clinical trial was carried out at Charing Cross Hospital, part of Imperial College Healthcare Trust, between 2014 and 2015. The team is now carrying out a feasibility study in North West London to test the use of the device in patients’ homes.

The potential of gripAble™ as a means of delivering cost-effective physiotherapy was recognised by a NHS England Innovation Challenge Prize in early 2016.

Lead researcher Dr Paul Bentley, who is a Clinical Senior Lecturer at Imperial College London and Honorary Consultant Neurologist at Imperial College Healthcare NHS Trust, said: “In the UK 100,000 new cases of arm weaknesses are diagnosed each year following a stroke. Often this impairs people’s ability to carry out daily activities, requiring long-term care. The use of mobile-gaming could provide a cost-effective and easily available means to improve the arm movements of stroke patients but in order to be effective patients of all levels of disability should be able to access it.

“We have developed the gripAble™ device to improve arm and cognitive function of patients who have mild to severe arm weaknesses. Unlike other therapies currently on the NHS, gripAble™ is a low cost device which can be used in hospitals and independently by patients at home. As such it could potentially help save the health service millions of pounds. We now intend to further develop the device so we can help more patients who are currently suffering from the effects of poor arm and upper body mobility.”

The researchers collaborated with the Human Robotics Group at Imperial College London to develop the device. The research was funded by the Imperial Confidence in Concept Award, the NHS England Innovation Challenge Prize, and EU 7th Framework Programme for Research and Technological Development grants.

The gripAble™ device is an example of the work of the Imperial Academic Health Science Centre (AHSC). This is a partnership between Imperial College London and three NHS Trusts, which aims to improve patient outcomes by harnessing scientific discoveries and translating them as quickly as possible into new diagnostics, devices and therapies, in the NHS and beyond. The researchers are working with Imperial Innovations, the College’s technology transfer partner, to spinout gripAble™ as a digital healthcare start-up to commercialise the device.

Journal Reference: Rinne P, Mace M, Nakornchai T, Zimmerman K, Fayer S, Sharma P, et al. Democratizing neurorehabilitation: how accessible are low-cost mobile-gaming technologies for self-rehabilitation of arm disability in stroke? PLoS ONE, October 2016 DOI: 10.1371/journal.pone.0163413

Author: Xaltius

This content is not for distribution. Any use of the content without intimation to its owner will be considered as violation.

KNOWLEDGE MANAGEMENT – DEAD OR ALIVE?

It is often said that all technologies must evolve or die. It was in 2003 that knowledge management was first proclaimed 'dead'. At that time, based on the definition of knowledge management as formalizing the management of an enterprise's intellectual assets, Gartner estimated enterprise adoption at greater than fifty percent. But with every passing day, technologists and employees in general became less interested in acquiring and sharing knowledge, primarily because knowledge was tied up in politics, ego, and culture.

Fast forward to 2016, going into 2017. Knowledge management is re-emerging and evolving at a tremendous pace, much unlike the past decade or so.

Why the change? Three main drivers are behind this revival: social media, crowdsourcing, and IBM Watson-like search and analytics.

Social media was pivotal in permeating the computing world, and sources like wikis and Quora became tools of knowledge management. Organizations started establishing their own enterprise social media platforms to share and store information. Crowdsourcing (gathering information from the crowd) took off, since abundant knowledge could now be obtained from a horde of sources. And lastly, the rise of cognitive computing changed the outlook of search.

Google Trends data for the last five years shows how crowdsourcing has helped revive knowledge management from its 'last breath'.

Earlier, knowledge management was just about content. With new ways of gathering knowledge, it has become far more collaborative, which has done much to boost the re-use of knowledge management technologies. Many organizations, especially in manufacturing, maintenance, and healthcare, are re-adopting it and building new knowledge management systems.

In conclusion, this decade will see knowledge management widely used and increasingly in demand.

Xaltius is one such company working in knowledge management and offering knowledge engineering services. To learn more about the company and its knowledge engineering work, please use the link below:
XALTIUS

Author: Xaltius

This content is not for distribution. Any use of the content without intimation to its owner will be considered as violation.
