Dataset Viewer
Auto-converted to Parquet
Column schema (19 columns):

  column                     type             range / classes
  article_id                 int64            6 to 10.2M
  title                      string           lengths 6 to 181
  content                    string           lengths 1.17k to 62.1k
  excerpt                    string           lengths 7 to 938
  categories                 string           18 classes
  tags                       string           lengths 2 to 806
  author_name                string           605 classes
  publish_date               string (date)    2012-05-21 07:44:37 to 2025-07-11 00:01:12
  publication_year           string (date)    2012-01-01 00:00:00 to 2025-01-01 00:00:00
  word_count                 int64            200 to 9.08k
  keywords                   string           lengths 38 to 944
  extracted_tech_keywords    string           lengths 32 to 191
  url                        string           lengths 43 to 244
  complexity_score           int64            1 to 4
  technical_depth            int64            2 to 10
  industry_relevance_score   int64            0 to 7
  has_code_examples          bool             2 classes
  has_tutorial_content       bool             2 classes
  is_research_content        bool             2 classes
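Because the dataset is auto-converted to Parquet, it can be pulled directly with the datasets library. Below is a minimal loading sketch; the repository id is a placeholder (the real path is not shown on this page), while the column names come from the schema above. The sample rows that follow list one field per line, in the column order above, with long text fields wrapping across several lines.

from datasets import load_dataset

# Placeholder repository id -- substitute the actual dataset path from the Hub page.
ds = load_dataset("your-org/aim-articles", split="train")

# Use the boolean flags and score columns from the schema to pull out hands-on material.
tutorials = ds.filter(
    lambda row: row["has_tutorial_content"] and row["has_code_examples"]
)
print(len(tutorials), "tutorial articles with code")
print(tutorials[0]["title"], "-", tutorials[0]["url"])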
64,823
How To Build Your Data Science Competency For A Post-Covid Future
The world collectively has been bracing for a change in the job landscape. Driven largely by the emergence of new technologies like data science and artificial intelligence (AI), these changes have already made some jobs redundant. To add to this uncertainty, the catastrophic economic impact of the Covid-19 pandemic has brought in an urgency to upskill oneself to adapt to changing scenarios. While the prognosis does not look good, this could also create demand for jobs in the field of business analytics. This indicates that heavily investing in data science and AI skills today could mean the difference between being employed or not tomorrow. By adding more skills to your arsenal today, you can build your core competencies in areas that will be relevant once these turbulent times pass. This includes sharpening your understanding of business numbers and analysing consumer demand – two domains which businesses will heavily invest in very soon. But motivation alone will not help. You need to first filter through the clutter of online courses that the internet is saturated with. Secondly, you need to create a study plan that ensures you successfully complete these courses. We have a solution. Developed with the objective of providing you a comprehensive understanding of key concepts tailored to align with the jobs of the future, Analytix Labs is launching a series of special short-term courses. These courses will not only help you upskill yourself, they will also ensure that you complete them in a matter of a few days. The short-term courses will have similar content to the regular ones, but packed in a more efficient way. Whether you are looking for courses in business analytics, applied AI, or data analytics, these should hold you in good stead for jobs of the future.

Analytics Edge (Data Visualization & Analytics)
About The Course: This all-encompassing data analytics certification course is tailor-made for analytics beginners. It covers key concepts around data mining, and statistical and predictive modelling skills, and is curated for candidates who have no prior knowledge of data analytics tools. What is more, the inclusion of the popular data visualization tool Tableau makes it one of the best courses available on the subject today. Additionally, it also puts an emphasis on widely used analytics tools like R, SQL and Excel, making this course truly unique.
Duration: While the original data analytics course this short-term course is developed from includes 180 hours of content and demands an average of 10-15 hours of weekly online classes and self-study, this course will enable you to acquire the same skills within a shorter period of time.
Target Group: While anyone with an interest in analytics can pursue this course, it is especially targeted at candidates with a background in engineering, finance, math, and business management. It will also be a useful skill-building course for candidates who want to target job profiles based around R programming, statistical analysis, Excel-VBA or Tableau-based BI analyst roles.

Data Science Using Python
About the course: Adapted to greatly help candidates searching for data science roles, this certification covers all that they need to know on the subject using Python as the programming language. While other languages like R are also commonly used today, Python has emerged as one of the more popular options within the data science universe. This 'Python for Data Science' course will make you proficient in deftly handling and visualizing data, and also covers statistical modelling and operations with NumPy. It also integrates these with practical examples and case studies, making it a unique online data science training course in Python.
Duration of the course: While the original data science course this short-term course is developed from includes 220 hours of content and demands an average of 15-20 hours of weekly online classes and self-study, this course will enable you to acquire the same skills within a shorter period of time.
Target Group: While anyone with an interest in analytics can pursue this course, it is especially targeted at candidates with a background in data analysis and visualization techniques. It will also help people who want to undergo Python training with advanced analytics skills to jumpstart a career in data science.

Machine Learning & Artificial Intelligence
About this course: This course delves into the applications of AI using ML and is tailor-made for candidates looking to start their journey in the field of data science. It will cover tools and libraries like Python, NumPy, Pandas, Scikit-Learn, NLTK, TextBlob, PyTorch, TensorFlow, and Keras, among others. Thus, after successful completion of this Applied AI course, you will not only be proficient in the theoretical aspects of AI and ML, but will also develop a nuanced understanding of its industry applications.
Duration of the course: While the original ML and AI course this short-term course is developed from includes 280 hours of content and demands an average of 8-10 hours of weekly self-study, this Applied AI course will enable you to acquire the same skills within a shorter period of time.
Target Group: While anyone with an interest in analytics can pursue this course, it is especially targeted at candidates with a background in engineering, finance, math, statistics, and business management. It will also help people who want to acquire AI and machine learning skills to kick-start their career in the field of data science.

Summary: While the Covid-19 pandemic has brought a partial – or even complete – lockdown in several places across the globe, people have been reorienting their lives indoors. With no end in sight, professionals need to turn these circumstances into opportunities to upskill. Given an oncoming recession and economic downturn, it behoves them to adapt to these changes to remain employable in such competitive times. In this setting, Covid-19 could emerge as a tipping point for learning, with virtual learning offering the perfect opportunity to self-learn.
The world collectively has been bracing for a change in the job landscape. Driven largely by the emergence of new technologies like data science and artificial intelligence (AI), these changes have already made some jobs redundant. To add to this uncertainty, the catastrophic economic impact of the Covid-19 pandemic has brought in an urgency to […]
["AI Trends"]
["Applications of Data Mining", "covid-19", "Data analyst jobs", "Data Science", "what is data science"]
Anu Thomas
2020-05-08T12:00:00
2020
988
["what is data science", "data science", "scikit-learn", "artificial intelligence", "machine learning", "covid-19", "AI", "PyTorch", "Keras", "ML", "Data analyst jobs", "Applications of Data Mining", "analytics", "Data Science", "TensorFlow"]
["AI", "artificial intelligence", "machine learning", "ML", "data science", "analytics", "TensorFlow", "PyTorch", "Keras", "scikit-learn"]
https://analyticsindiamag.com/ai-trends/how-to-build-your-data-science-competency-for-a-post-covid-future/
3
10
2
false
true
true
10,060,860
Mathangi Sri appointed as Chief Data Officer of CredAvenue
Mathangi Sri has been appointed as the Chief Data Officer at CredAvenue, a debt product suite and marketplace company. Sri joined CredAvenue from Gojek, where she headed data strategy for GoFood and played a key role in building various AI and ML solutions. "I would be building the data strategy encapsulating data sciences, ML engineering, data governance and data engineering. I believe in data as the first principle thinking, and thus will focus on building high-impact data platforms that deliver solid business impacts. Data is at the core of every operation at CredAvenue and I am looking forward to an exciting journey building world-class solutions powering the debt marketplace," said Mathangi Sri, Chief Data Officer at CredAvenue. Mathangi Sri has an over 18-year track record of building world-class data science solutions and products, and holds 20 patent grants in the area of intuitive customer experience and user profiles. She has recently published a book called "Practical Natural Language Processing with Python". "Mathangi's exceptional experience in data science will help us shape CredAvenue's journey towards becoming a more futuristic company. We plan to invest significantly in our data platform and empower our customers to manage their transactions actively," said Gaurav Kumar, founder and CEO of CredAvenue. Mathangi Sri has worked with organisations like Citibank, HSBC and GE, and tech startups like 247.ai and PhonePe. She is also an active contributor to the data science community.
Mathangi Sri has over 20 patent grants in the area of intuitive customer experience and user profiles.
["AI News"]
["chief data officer", "gojek", "Mathangi Sri"]
SharathKumar Nair
2022-02-17T11:06:01
2022
235
["data science", "Go", "Mathangi Sri", "AI", "chief data officer", "gojek", "ML", "Python", "data engineering", "data governance", "GAN", "R", "startup"]
["AI", "ML", "data science", "Python", "R", "Go", "data engineering", "data governance", "GAN", "startup"]
https://analyticsindiamag.com/ai-news-updates/mathangi-sri-appointed-as-chief-data-officer-of-credavenue/
2
10
3
true
false
false
2,077
Interview – Ajay Ohri, Author “R for Business Analytics”
Ajay Ohri of Decisionstats.com has recently published 'R for Business Analytics' with Springer. The book is now available on Amazon at http://www.amazon.com/R-Business-Analytics-A-Ohri/dp/1461443423

The introduction of the book: R for Business Analytics looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and its 4000 packages. With this information the reader can select the packages that can help process the analytical tasks with minimum effort and maximum usefulness. The book is aimed at business analysts with basic programming skills who want to use R for business analytics. Note that the scope of the book is neither statistical theory nor graduate-level research in statistics; it is written for business analytics practitioners. In an interview with Analytics India Magazine, Ajay talks about his experience of writing the book and his take on R and similar statistical software.

AIM: How did you decide to write a book on R especially for Business Analytics professionals?
AO: I got involved in R in 2007 when I created my startup in business analytics consulting, since I could not afford my existing tool, Base SAS. After learning it for a couple of years, I found that the existing documentation and literature was aimed more at statisticians than at MBAs like me who wanted to learn R for business analytics. So I sent a proposal to Springer Publishing, they accepted, and I wrote the book.

AIM: What did it take to have a book published?
AO: An idea, a good proposal, two years of writing and six months of editing. Lots of good luck, and good wishes from my very patient instructors and mentors across the world.

AIM: How is R different from other statistical tools available in the market? What are its strengths and weaknesses vis-à-vis SAS and SPSS?
AO: R is fundamentally different from the SAS language (which is divided into procedures and data steps) and the menu-driven SPSS. It is object oriented, much more flexible and hence powerful, yet confusing to the novice, as there are multiple ways to do anything in R. It is overall a very elegant language for statistics, and the strengths of the language are enhanced by nearly 5000 packages developed by leading brains across the universities of the planet.

AIM: Which R packages do you use the most and which ones are your favorites?
AO: I use R Commander and Rattle a lot, and I use the dependent packages. I use car for regression and forecast for time series, and many packages for specific graphs. I have not mastered ggplot, though I do use it sometimes. Overall I am waiting for Hadley Wickham to come up with an updated book on his ecosystem of packages, as they are very formidable, completely comprehensive and easy to use in my opinion, so much so that I can get by with the occasional copy and paste of code.

AIM: What level of adoption do you see for R as a preferred tool in the industry? Are Indian businesses also keen to adopt R?
AO: I see surprising growth for R in business, and I have had to turn down offers for consulting and training as I write my next book, R for Cloud Computing. Indian businesses are keen to cut costs like businesses globally, but have the added advantage of a huge pool of young engineers and quantitatively trained people to choose from. So there is more interest in India for R, and it is growing thanks to the efforts of companies like SAP, Oracle, Revolution Analytics and RStudio, who have invested in R and are making it more popular. The R Project organization is dominated by academia, and this reflects the fact that their priority is making the software better, faster and more stable, but the rest of the community has been making efforts to introduce it to industry.

AIM: How did you start your career in analytics and how were you first acquainted with R?
AO: I started my career after my MBA in selling cars, which was selling a lot of dreams and managing people telling lies to people to sell cars. So I switched to business analytics thanks to GE in 2004, and I had the personal good luck of having Shrikant Dash, ex-CEO of GE Analytics, as my first US client. He was a tough guy and taught me a lot. I came to R only after leaving the cozy world of corporate analytics in 2007.

AIM: Are you working on any other book right now?
AO: I am working on "R for Cloud Computing" for Springer, besides my usual habit of writing my annual poetry book (which is free), tentatively titled "Ulysses in India". My poetry blog is at http://poemsforkush.com and my technology blog is at http://decisionstats.com, and I write there when not writing or pretending to write books.

AIM: What do you suggest to new graduates aspiring to get into the analytics space?
AO: Get in early, pick up multiple languages, pick up business domain knowledge, and work hard. Analytics is a very lucrative and high-growth career. You can read my writings on analytics by just googling my name.

AIM: How do you see analytics evolving today in the industry as a whole? What are the most important contemporary trends that you see emerging in the analytics space across the globe?
AO: I don't know how analytics will evolve, but it will grow bigger and move more towards the cloud and bigger data sizes. Big Data/Hadoop, cloud computing, business analytics and optimization, and text mining are some of the buzzwords currently in fashion.

Biography of Ajay Ohri: Ajay Ohri is the founder of analytics startup Decisionstats.com. He has pursued graduate studies at the University of Tennessee, Knoxville and the Indian Institute of Management, Lucknow. In addition, Ohri has a mechanical engineering degree from the Delhi College of Engineering. He has interviewed more than 100 practitioners in analytics, including leading members from all the analytics software vendors. Ohri has written almost 1300 articles on his blog, besides guest writing for influential analytics communities. He teaches courses in R through online education and has worked as an analytics consultant in India for the past decade. Ohri was one of the earliest independent analytics consultants in India, and his current research interests include spreading open source analytics, analyzing social media manipulation, simpler interfaces to cloud computing and unorthodox cryptography.
Ajay Ohri of Decisionstats.com has recently published 'R for Business Analytics' with Springer. The book is now available on Amazon at http://www.amazon.com/R-Business-Analytics-A-Ohri/dp/1461443423 The introduction of the book: R for Business Analytics looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and […]
["AI Features"]
["Interviews and Discussions", "Oracle Interview"]
Дарья
2012-11-16T15:50:26
2012
1,085
["big data", "Go", "startup", "programming_languages:R", "AI", "cloud computing", "Oracle Interview", "Aim", "analytics", "GAN", "R", "Interviews and Discussions"]
["AI", "analytics", "Aim", "cloud computing", "R", "Go", "big data", "GAN", "startup", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/interview-ajay-ohri-author-r-for-business-analytics/
4
10
4
false
false
false
30,533
How Is AI Guiding The Navigation Of Autonomous Planetary Rovers
Source: NASA. Whether locating its position on a planet or taking pictures, a planetary rover that depends on human operators can take long hours to complete a task. With advanced AI, researchers are developing deep learning algorithms to perform the necessary image observation and shorten the localization process drastically. This article highlights some of the common challenges in current navigation systems and how the use of machine learning can ensure a better future. It is based on recent research by the University of Glasgow.

Planetary Rover Navigation
Planetary Rover Navigation (PRN) requires a robot to design a feasible route map from a starting pose to a final destination in an optimal manner. Leading space research organizations like ISRO, NASA and JAXA are adopting this technique, which aids in finding an appropriate direction. The technique can be divided into two scenarios:
1. Global path planning: This technique focuses on finding high-level routes based on prior knowledge of the surroundings and is suited to generating an optimal high-level procedure for a rover to execute. Yet this method is insufficient for handling dynamic environments.
2. Local path planning: This technique depends upon sensory information to ensure that global plans are accomplished exactly and possible collisions are prevented.

Overcoming Challenges In The Current System
Taking into account the execution time, the memory overhead, and whether the environment of the search machine is static, dynamic or real-time deterministic, an adaptive feature selection approach to terrain classification based on the random forest method has been presented: an auto-learning framework is used to train a visual classifier, and fundamental information is extracted from geometric features associated with the terrain. Additionally, learning-based fuzzy and neural network approaches have brought improvements. These approaches focus on the accurate navigation of a mobile robot at adjustable speeds while avoiding local minima. A robot becomes adept at maneuvering around obstacles by self-learning from experience. These methods illustrate how deep learning techniques can be used to overcome problems associated with the exploration of unknown celestial bodies.

Machine Learns To Find A Path: AI In Rovers
Since the advance of AI into space exploration, a number of programmes have been developed that enhance the reliability and capability of direction-determining procedures. Presently, the area that most interests scientists is the generation of highly realistic routes for the rovers.

Path Finding Algorithms
The operation consists of two main steps: graph generation and a pathfinding algorithm. The graph generation problem for terrain topology is acknowledged as a foundation of robotics in space exploration. Route navigation experiments are conducted in diverse continuous environments, such as known 2D/3D and unknown 2D environments. Each of these experiments uses one of two techniques: skeletonization or cell decomposition.

Skeletonization
In the skeletonization procedure, a skeleton is formed from the continuous environment. This skeleton captures the notable topology of the traversable space by defining a graph G = (V, E), where V is a set of vertices that map to coordinates in the continuous environment and E is the set of edges connecting vertices that are in the line of sight of one another. The skeletonization technique can produce two types of uneven grid, namely a visibility graph or a waypoint graph.

Cell Decomposition
The cell decomposition technique breaks down the traversable space in the continuous environment into cells. Each cell is commonly represented by a circle or a convex polygon that does not contain obstructions. Machines can travel in a straight line between any two coordinates within the same cell. Source: JAXA (Japan Aerospace Exploration Agency)

A* Search Algorithm
Further along in the direction-finding process, the task is to return the optimal path to the machine in a dynamic manner. A* is the most notable search algorithm in robotics. It was the first algorithm to use a heuristic function to traverse a search graph in a best-first manner: the search expands from the origin node until the objective node is found. A* has inspired many modified and improved algorithms.

Concluding Note
It is widely accepted that exceptional outcomes have been seen as AI has ventured into more complex and harsh environments. It is fair to say that adaptive, intelligent and more generalized methods will play a crucial role in equipping planetary rovers with the essential facilities to interact with the environment in a truly autonomous way. Though complete accuracy still remains a challenge, with the day-to-day advancements in adaptive self-learning systems, the future of space rovers looks assured.
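The article names A* as the notable pathfinding algorithm but stops short of the mechanics, so here is a minimal, generic sketch of best-first search with an admissible heuristic on a small occupancy grid. The grid, start and goal are invented for illustration; this is textbook A*, not the University of Glasgow implementation discussed above.

import heapq

def a_star(grid, start, goal):
    # A* over a 2D occupancy grid: 0 = traversable cell, 1 = obstacle.
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # entries are (f = g + h, g, cell, path)
    visited = set()
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)  # expand the most promising node first
        if cell == goal:
            return path  # optimal route found
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):  # 4-connected moves
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no traversable route exists

# Toy terrain map: 1s mark obstacles the rover must route around.
terrain = [[0, 0, 0, 1],
           [1, 1, 0, 1],
           [0, 0, 0, 0],
           [0, 1, 1, 0]]
print(a_star(terrain, (0, 0), (3, 3)))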
Whether locating its position on a planet or taking pictures, a planetary rover that depends on human operators can take long hours to complete a task. With advanced AI, researchers are developing deep learning algorithms to perform the necessary image observation and shorten the localization process drastically. This article highlights some of the common challenges in current navigation systems and […]
["AI Features"]
["autonomous systems"]
Bharat Adibhatla
2018-11-22T09:21:45
2018
746
["ai_frameworks:JAX", "Go", "machine learning", "programming_languages:R", "AI", "neural network", "autonomous systems", "deep learning", "JAX", "GAN", "R"]
["AI", "machine learning", "deep learning", "neural network", "JAX", "R", "Go", "GAN", "ai_frameworks:JAX", "programming_languages:R"]
https://analyticsindiamag.com/ai-features/how-is-ai-guiding-the-navigation-of-autonomous-planetary-rovers/
3
10
0
true
false
true
50,991
Data Privacy: How Big Tech Companies Like Facebook Cross The Line
The last few years have seen data privacy issues becoming mainstream. In many cases, big tech companies have been found to have mishandled consumer data, or mined data without consent. The case for data privacy is becoming even more relevant as we move into the age of AI. The argument is heated, and tech companies are already being put on centre stage. Large tech companies including Facebook, Google and Amazon have found multiple critics, yet it seems there is a race among both companies and nations to acquire data, as it has been consistently touted as the new-age oil, and possibly more powerful than it. But constant privacy and data breaches have put the importance of privacy to the forefront, especially in the West.

Companies Hungry For Hyper-Personalisation At The Expense Of Privacy
There is a major aspect to data usage on the consumer side. Companies are hungry for hyper-personalisation, meaning that to gain a competitive edge they want to know everything about a particular customer, his needs and behaviours on a given tech platform, in order to make useful recommendations. According to Hemant Misra, Head of Applied Research at Swiggy, there needs to be a balance between hyper-personalisation for customer experience and ethical data usage. "None of the consumer tech companies with user data, except for Google, does hyper-personalisation to the extent of having a complete 360-degree view of any particular user. So, we are looking at the data and doing clustering in order to understand who the other similar users are, their choices and needs, and build recommender systems for better customer experience. The problem is when data gets joined; when Facebook acquired WhatsApp, it gave the company a better view of analytics across the two platforms, user devices, their social status, the places they are visiting, by tracking all that using WhatsApp and joining it with the Facebook social media platform. So, the problem is that the more hyper-personalisation there is, the more data is collected, and that can lead to misuse. We saw what happened with the Facebook-Cambridge Analytica scandal, which exposed the misuse on the Facebook platform," Misra explained while speaking at ThoughtWorks Live 2019.

The question is where this is all headed, and what the end goal of the data being collected by governments is. The global AI technology race is another big aspect of it. According to Sudhir Tiwari, Managing Director of ThoughtWorks India, at a time when a Go champion has retired because he cannot find a way to beat AI, it shows the power and dominance that data and AI are bringing. "There could also be a data arms race among countries, where some countries are more aggressive on data collection and algorithms which can generate insights much better. More importantly, players can't decipher the precise strategy used by AI to dominate the game so convincingly. The same can be said about AI's role in global power and influence, and AI needs more and more data at the expense of privacy. Unless there is a global consensus on data collection and usage, privacy and potential misuse of data will remain a challenge," says Sudhir.

Why Data Privacy Will Be Valuable In Future
There is also a huge debate on who owns the data. While users of web services are generating petabytes of data, they may have no control over it. Users are also the victims of data misuse in the form of surveillance and social engineering, which may be influencing every aspect of human life, experts say. "As consumers lose trust because of data breaches, they will start looking for alternative products. The future will be different and data privacy will be valuable. We have seen in the last 5 years how big tech companies have refused to be responsible about how they handle personal data, and so they will face the consequence of losing consumer trust. At the same time, they might even go beyond and try to bank on surveillance capitalism in the coming years. But I think the true next phase of the data revolution can only happen with transparency and security standards for user data, with a focus on good tech," said Govind Shivkumar, Principal, Beneficial Technology at Omidyar Network. Jaspreet Bindra, author of The Tech Whisperer and digital transformation consultant, says there is no free lunch. "On the Internet, we are used to getting things for free. We forget that if something is free, then you are the product. Without reading the user terms and conditions, users unknowingly give consent to companies on how their data can be used, especially in geographies where data regulations are not currently in place," Bindra said.
The last few years have seen data privacy issues becoming mainstream. In many cases, big tech companies have been found to have mishandled consumer data, or mined data without consent. The case for data privacy is becoming even more relevant as we move into the age of AI. The argument is heated, and […]
["AI Features"]
["Big Tech", "Data Privacy", "data protection", "surveillance", "tech companies", "what is big data"]
Vishal Chawla
2019-12-02T18:00:00
2019
860
["Go", "API", "what is big data", "surveillance", "AI", "R", "digital transformation", "programming_languages:R", "programming_languages:Go", "Git", "Data Privacy", "Big Tech", "analytics", "Rust", "tech companies", "data protection"]
["AI", "analytics", "R", "Go", "Rust", "Git", "API", "digital transformation", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/data-privacy-big-tech-companies-facebook/
3
10
0
false
false
false
10,097,178
Infosys Signs Mega AI Deal Worth $2B, Shares Rise Over 3%
India's second-largest software services exporter Infosys has informed the stock exchanges that it has entered into a new agreement with an undisclosed established client to provide AI and automation services over a period of five years. The partnership has an estimated target spend of $2 billion. The announcement pushed the company's stock price up by 3.6% on the Bombay Stock Exchange (BSE). "Infosys has entered into a framework agreement with one of its existing strategic clients to provide AI and automation led development, modernisation and maintenance services. The total client target spend over 5 years is estimated at USD 2 billion," the company said in an exchange filing on Monday. The news comes three days before July 20, when Infosys is scheduled to release the results of its June quarter (Q1FY24). According to the company's exchange filing, the agreement covers the advancement, modernisation and upkeep of AI and automation-related services. Notably, the IT giant had recently unveiled a wide-ranging and cost-free AI certification training initiative through Infosys Springboard. This program aims to equip individuals with the skills required to thrive in the future job landscape. Infosys' AI move underscores the growing trend of Indian IT companies increasing their investments in the field of AI. Long before OpenAI's ChatGPT hit the scene, Tech Mahindra was already working with generative AI. Notably, the IT behemoth's chief executive, CP Gurnani, lauded the Storicool platform, an auto content creation tool that proved ahead of its time. In line with this trajectory, Tata Consultancy Services (TCS) made headlines with its own foray into generative AI capabilities, joining forces with Google Cloud. Furthermore, Wipro, too, has entered a partnership with Google Cloud to harness the power of its generative AI tools, integrating them with its in-house AI models, business accelerators, and pre-built industry solutions, as per the company's announcement. This signifies a competitive shift within the Indian IT landscape. Read more: How Indian IT Giants are Bringing GenAI to Their Clients
Infosys will provide AI and automation services to the undisclosed established client over a period of five years.
["AI News"]
["AI India", "BSE", "India AI", "Indian IT", "Infosys", "TCS", "Wipro"]
Tasmia Ansari
2023-07-19T11:56:02
2023
324
["India AI", "Wipro", "ChatGPT", "GenAI", "Go", "Infosys", "AI", "OpenAI", "R", "BSE", "GPT", "Ray", "Aim", "generative AI", "AI India", "Indian IT", "TCS"]
["AI", "generative AI", "GenAI", "ChatGPT", "OpenAI", "Aim", "Ray", "R", "Go", "GPT"]
https://analyticsindiamag.com/ai-news-updates/infosys-signs-mega-ai-deal-worth-2b-shares-rise-over-3/
2
10
2
false
false
false
10,014,327
Implications Of Allowing Private Sector Into Indian Space Industry
The Department of Space recently released a draft of a new space policy that eases the regulations on private entities participating in space-based activities. The policy seeks to promote the participation of private industry in India in providing space-based communication, both within the country and outside, to fulfil the increasing demand for satellite bandwidth. The government thinks that private entities can play a significant role in addressing the growing demand within India and also use the opportunity to make a mark in the international space communication market. This article discusses what doors the policy opens for private companies and the possible socio-economic implications for India.

Benefits of including private players
Until now, the private sector largely worked in a subcontractor role with ISRO, and there was no independent actor outside the public sector. However, if the new policy is passed, private companies will be allowed to establish and operate satellite systems to provide capacity for communication. They will also be allowed to procure non-Indian orbital resources to build their space-based systems for communication services in and outside India. Alongside, ISRO will make its facilities and other relevant assets available to improve their capacities. The authorisation for this, however, will be overseen by a government regulator, IN-SPACe, a regulatory body under the Department of Space.

Positive outcomes of the policy
To harness the enormous potential of space opportunities both domestically and worldwide, the Indian space economy needs to scale up. There is a lot of untapped potential that the space industry can explore, given the increasing number of internet users in India. Experts also argue that in order to cater to this increasing demand, it is imperative to look beyond the traditional modes of internet delivery and look for space-based solutions. With the infrastructure and knowledge already available through India's space program and the vast potential and resources the private sector has to offer, the new policy could help the space industry grow and fill the communication infrastructure deficit. Private players in India and abroad are already looking forward to participating. As a matter of fact, AWS, Amazon's cloud arm, has recently announced a new business segment, 'Aerospace and Satellite Solutions', to oversee innovations in the satellite industry. Indian firms like Sankhya Labs are also looking to invest. At the same time, the availability and demonstration of emerging technologies has great significance in defining modern-day geopolitics. Hence, given the current geopolitical situation of the country and the security threats it faces, growth in the space sector can help the country gain leverage over others.

Negative consequences of the policy
Space technology is expensive and needs heavy investment. This kind of spending power is available only to select rich corporates, which can lead to monopolisation of the sector. Also, IN-SPACe's role has been defined as a government regulator 'to provide a level-playing field' for everyone. However, in the past, this has resulted in governments favouring the private sector over the public sector or leaning towards specific private brands. ISRO, since its inception, has always aimed to work on projects that can help India become self-reliant. The space program has always worked on applications like remote sensing, tracking of land use and resource mapping, among others. However, private companies will have more profit-driven interests than developing solutions that cater to the immediate socio-economic needs of the country. Hence, if a situation were to arise where private companies establish a space monopoly or gain unfair advantages from government regulators, space applications for social development will take a backseat and the public sector may not survive, or may slowly become irrelevant. The telecommunication sector is a case in point.

Wrapping Up
India has successfully demonstrated its ability to carry out space research and projects. With this proposed new policy for space, India wants to tap into the private sector, which could help the industry grow. While that is the case, unregulated participation of private industry in the space sector will not only have socio-economic repercussions but might also end up undermining the work that ISRO has been successfully doing for over five decades. Since private space activities will increase significantly if the policy is accepted, India needs to develop a robust legislative framework for space to ensure sustainable and inclusive growth.
The Department of Space recently released a draft of a new space policy that eases the regulations on private entities participating in space-based activities. The policy seeks to promote the participation of private industry in India in providing space-based communication, both within the country and outside, to fulfil the increasing demand for satellite […]
["IT Services"]
["IN-SPACe", "ISRO"]
Kashyap Raibagi
2020-12-15T11:00:00
2020
714
["Go", "ISRO", "AWS", "AI", "cloud_platforms:AWS", "innovation", "programming_languages:R", "RAG", "IN-SPACe", "Aim", "ViT", "R"]
["AI", "Aim", "RAG", "AWS", "R", "Go", "ViT", "innovation", "cloud_platforms:AWS", "programming_languages:R"]
https://analyticsindiamag.com/it-services/implications-of-allowing-private-sector-into-indian-space-industry/
2
10
3
false
false
false
10,056,800
Hands-On Guide to Hugging Face PerceiverIO for Text Classification
Nowadays, most deep learning models are highly optimized for a specific type of dataset. Computer vision and audio analysis cannot use architectures that are good at processing textual data. This level of specialization naturally leads to models that are highly specialized in one task and unable to adapt to other tasks. So, moving towards a general-purpose model, we will talk about Perceiver IO, which is designed to address a wide range of tasks with a single architecture. The following are the main points to be discussed in this article.

Table of Contents
1. What is Perceiver IO?
2. Architecture of Perceiver IO
3. Implementing Perceiver IO for Text Classification

Let's start the discussion by understanding Perceiver IO.

What is Perceiver IO?
A Perceiver is a transformer that can handle non-textual data like images, sounds and video, as well as spatial data. Other significant systems that came before Perceiver, such as BERT and GPT-3, are also based on transformers. The Perceiver uses an asymmetric attention technique to condense inputs into a latent bottleneck, allowing it to learn from a great amount of disparate data. On classification challenges, Perceiver matches or outperforms specialized models, yet it is free of modality-specific components. It lacks components dedicated to handling images, text or audio, for example, and it can handle several associated input streams of varying sorts. It takes advantage of a small number of latent units to create an attention bottleneck through which inputs must pass. One advantage is that this eliminates the quadratic scaling issue that plagued early transformers. Previously, specialized feature extractors were employed for each modality. Perceiver IO can query the model's latent space in a variety of ways to generate outputs of any size and semantics. It excels at tasks that need structured output spaces, such as natural language and visual comprehension, and at multitasking. Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation. To produce outputs, the latent array is attended to using a specific output query associated with that particular output. To predict optical flow for a single pixel, for example, a query would use the pixel's XY coordinates along with an optical flow task embedding to generate a single flow vector. It is a spin-off of the encoder/decoder architecture seen in other projects.

Architecture of Perceiver IO
The Perceiver IO model is based on the Perceiver architecture, which achieves cross-domain generality by assuming a simple 2D byte array as input: a set of elements (which could be pixels or patches in vision, characters or words in language, or some form of learned or unlearned embedding), each described by a feature vector. The model then uses Transformer-style attention to encode information about the input array using a smaller number of latent feature vectors, followed by iterative processing and a final aggregation down to a category label. Hugging Face Transformers' PerceiverModel class serves as the foundation for all Perceiver variants. To initialize a PerceiverModel, three further instances can be specified: a preprocessor, a decoder, and a postprocessor. A preprocessor is optionally used to preprocess the inputs (which might be any modality or a mix of modalities). The preprocessed inputs are then used to execute a cross-attention operation with the latent variables of the Perceiver encoder. Perceiver IO is a domain-agnostic process that maps arbitrary input arrays to arbitrary output arrays. The majority of the computation takes place in a latent space that is typically smaller than the inputs and outputs, making the process computationally tractable even when the inputs and outputs are very large. In this technique (referring to the architecture above), the latent variables create queries (Q), whilst the preprocessed inputs generate keys and values (KV). Following this, the Perceiver encoder updates the latent embeddings with a (repeatable) block of self-attention layers. Finally, the encoder produces a tensor of shape (batch_size, num_latents, d_latents) containing the latents' most recent hidden states. Then there is an optional decoder, which may be used to turn the final hidden states of the latents into something more useful, like classification logits. This is performed by a cross-attention operation in which trainable embeddings create queries (Q) and the latents generate keys and values (KV).

Perceiver IO for Text Classification
In this section, we will see how the Perceiver can be used for text classification. First, let's install the Transformers and Datasets modules from Hugging Face.

! pip install -q git+https://github.com/huggingface/transformers.git
! pip install -q datasets

Next, we will prepare the data. The dataset contains IMDB movie reviews and we are using a chunk of it. After loading the dataset, we build label mappings that will come in handy at inference time.

from datasets import load_dataset

# load a small slice of the IMDB dataset
train_ds, test_ds = load_dataset("imdb", split=['train[:100]+train[-100:]', 'test[:5]+test[-5:]'])

# build the label mappings used later for inference
labels = train_ds.features['label'].names
print(labels)
id2label = {idx: label for idx, label in enumerate(labels)}
label2id = {label: idx for idx, label in enumerate(labels)}
print(id2label)

Output

In this step, we preprocess the dataset for tokenization. For that, we use the PerceiverTokenizer on both the train and test datasets.

# Tokenization
from transformers import PerceiverTokenizer

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
train_ds = train_ds.map(lambda examples: tokenizer(examples['text'], padding="max_length", truncation=True), batched=True)
test_ds = test_ds.map(lambda examples: tokenizer(examples['text'], padding="max_length", truncation=True), batched=True)

We are going to use PyTorch for the modelling, and for that we need to set the format of our data to be compatible with PyTorch.

# make the datasets compatible with torch
from torch.utils.data import DataLoader

train_ds.set_format(type="torch", columns=['input_ids', 'attention_mask', 'label'])
test_ds.set_format(type="torch", columns=['input_ids', 'attention_mask', 'label'])
train_dataloader = DataLoader(train_ds, batch_size=4, shuffle=True)
test_dataloader = DataLoader(test_ds, batch_size=4)

Next, we will define and train the model.

import torch
from transformers import PerceiverForSequenceClassification, AdamW
from tqdm.notebook import tqdm
from sklearn.metrics import accuracy_score

# Define the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = PerceiverForSequenceClassification.from_pretrained("deepmind/language-perceiver",
                                                           num_labels=2,
                                                           id2label=id2label,
                                                           label2id=label2id)
model.to(device)

# Train the model
optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(20):  # loop over the dataset multiple times
    print("Epoch:", epoch)
    for batch in tqdm(train_dataloader):
        # get the inputs
        inputs = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)
        labels = batch["label"].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs=inputs, attention_mask=attention_mask, labels=labels)
        loss = outputs.loss
        loss.backward()
        optimizer.step()

        # evaluate on the current batch
        predictions = outputs.logits.argmax(-1).cpu().detach().numpy()
        accuracy = accuracy_score(y_true=batch["label"].numpy(), y_pred=predictions)
        print(f"Loss: {loss.item()}, Accuracy: {accuracy}")

Now, let's run inference with the model.

text = "I loved this epic movie, the multiverse concept is mind-blowing and a bit confusing."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Forward pass
outputs = model(inputs=input_ids.to(device))
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted:", model.config.id2label[predicted_class_idx])

Output:

Final Words
Perceiver IO is an architecture that can handle general-purpose inputs and outputs while scaling linearly in both input and output sizes. As we have seen in practice, this architecture produces good results in a wide range of settings. Although we have only used it for text data here, it can also be used for audio, video and image data, making it a promising candidate for a general-purpose neural network architecture.

References: Hugging Face Documentation, Hugging Face Blog, Official Colab Notebooks, Link for the above codes
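The closing note above says the same architecture also handles images. As a rough sketch of that claim (not part of the original tutorial), the snippet below runs the learned-position-embedding image checkpoint documented for Perceiver in Hugging Face Transformers; the checkpoint name and classes are taken from those docs as an assumption, and the image URL is just a sample picture to download.

import requests
import torch
from PIL import Image
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned

# Sample image -- substitute any RGB image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-learned")
model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned")

# Same inputs= keyword as the text model above: the Perceiver forward pass is modality-agnostic.
inputs = feature_extractor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    outputs = model(inputs=inputs)
print("Predicted:", model.config.id2label[outputs.logits.argmax(-1).item()])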
A perceiver is a transformer that can handle non-textual data like images, sounds, and video, as well as spatial data.
["Deep Tech"]
["Data Science", "Deep Learning", "Hugging Face", "Python", "text classification"]
Vijaysinh Lendave
2021-12-22T16:00:00
2021
1,162
["Hugging Face", "text classification", "NumPy", "AI", "neural network", "PyTorch", "Transformers", "computer vision", "Python", "Ray", "Colab", "deep learning", "Deep Learning", "Data Science"]
["AI", "deep learning", "neural network", "computer vision", "Ray", "PyTorch", "Hugging Face", "Transformers", "Colab", "NumPy"]
https://analyticsindiamag.com/deep-tech/hands-on-guide-to-hugging-face-perceiverio-for-text-classification/
4
10
0
true
true
true
10,091,321
Can You Upload Your Mind and Live Forever?
After the tragic death of her partner, Martha comes across a service that lets people stay in touch with the deceased. This is the plot of an episode in the Black Mirror anthology series, which explores the darker side of tech. The episode, called 'Be Right Back', was released in 2013; however, a decade later, keeping your loved ones' consciousness alive even after they are dead could become a reality with AI. Pratik Desai, the creator of KissanGPT, believes something similar may be possible by the end of this year. "Start regularly recording your parents, elders and loved ones. With enough transcript data, new voice synthesis and video models, there is a 100% chance that they will live with you forever even after leaving their physical body," he tweeted. The idea is to gather sufficient data to construct a digital avatar in the form of software, a chatbot, or even a humanoid robot that resembles your loved one, allowing them to live on, in a sense, forever. However, Desai faced severe backlash for the tweet. Many made comparisons to the Black Mirror episode, which delves into the aftermath of excessive dependence on technology as a means of dealing with mourning and the passing of someone close.

A dangerous territory
Desai received criticism because death and grief are an inevitable part of life. Although using AI to preserve the consciousness of our loved ones may be achievable, it could ultimately lead to denial and prolong the grieving process, resulting in further negative outcomes. Notably, in the Black Mirror episode, Martha struggles with the ethical and emotional implications of interacting with a virtual version of her deceased partner. Theo Priestley, author of 'The Future Starts Now', responding to Desai's tweet, said, "Not only are they 'not' your parents but this is unethical and counter to natural grief... it's insane. It's transference at best, at worst, it's preventing a person from overcoming their grief." Similarly, a Twitter user said that AI will never be their parents; these would just be interactive tape-recordings of them. Another user said that it is absolutely ghoulish to consider your family and friends as data to be immortalised with AI rather than as living, breathing people. Another concern about AI's ability to bring back the dead is that it will most probably be offered as a service by companies looking to profit from it. "What happens when you can no longer afford the subscription to have your dead relative around to talk to?" Priestley asks. Imagine a scenario where companies use AI-generated versions of our loved ones to sell products. This could give rise to serious concerns around privacy, exploitation, and emotional manipulation.

Desai clarifies
In a conversation with AIM, Desai mentions that the tweet was taken out of context. To elaborate, he recounts a deeply personal experience that influenced the thought behind the tweet. When Desai, who lives in the US, had a daughter, his grandmother in India was eager to meet her. "We had a girl in our family after a very long time and she was very excited to meet her. I was very close [to her] and we had many things planned. But, she passed away weeks before our scheduled trip to India," he narrated. Desai was devastated and harbours deep regrets over not having any pictures or recordings of his grandmother to show to his daughter. Clarifying further, Desai said that what he had in mind was a tool that could read bedtime stories to his daughter in her great-grandmother's voice. Given the technology we have now, creating such a tool, with the person's consent, is very much possible today. However, he never intended to suggest a Black Mirror-esque scenario, he clarified.

Uploading consciousness, a possibility?
While Desai said keeping your loved ones alive through AI could be possible in a year or so, we wondered how true his statement was. Surprisingly, or shockingly for some, Desai could be speaking the truth. Last year, 87-year-old Marina Smith MBE managed to speak at her own funeral. Smith, who passed away in June 2022, was able to talk to her loved ones at her own funeral through an AI-powered 'holographic' video tool. The conversational video technology was invented by StoryFile, an AI-powered video platform that is cloud-based and automatic, bringing the power of conversational video into everyone's hands. Similarly, Somnium Space, a metaverse company founded by Artur Sychov, is already offering a comparable service called 'Live Forever'. Sychov wants to create digital avatars of people that will be accessible to their loved ones, even after their death. "We can take your data and apply AI to it and recreate you as an avatar," Sychov told Vice. "You will meet the person and you would, maybe for the first 10 minutes of the conversation, not even know that it's actually AI. That's the goal," he said. Interestingly, former Googler Ray Kurzweil spoke publicly about using technology to keep his father's consciousness alive after his demise way back in 2010. More recently, Kurzweil also predicted that humans will achieve immortality by 2030. Tools to facilitate this already exist, and the technology is only going to get better with time. Hence, despite the ethical concerns surrounding it, the desire to reconnect with deceased loved ones could be a powerful driver of demand for these technologies.
Despite the ethical concerns surrounding it, the desire to reconnect with deceased loved ones could be a powerful driver for the demand for these technologies.
["AI Highlights"]
[]
Pritam Bordoloi
2023-04-13T16:30:00
2023
896
["Go", "AI", "Git", "RAG", "GPT", "Ray", "Aim", "ViT", "R", "llm_models:GPT"]
["AI", "Aim", "Ray", "RAG", "R", "Go", "Git", "GPT", "ViT", "llm_models:GPT"]
https://analyticsindiamag.com/ai-highlights/can-you-upload-your-mind-and-live-forever/
2
10
0
false
true
false
2,285
The Rise of Autonomous Cars In India
Let’s talk cases first. Google was the first company to launch self-driving cars. Tesla Motors, General Motors and Ford soon followed suit. Uber, the on-demand car player last year announced a $300 million deal with Swedish car maker Volvo to develop fully driverless, or autonomous, cars by 2021. The company acquired Otto, a San Francisco based startup focused on self-driving trucks. Consider the Indian scenario and why India direly needs autonomous vehicles. Even if we leave aside the deadly traffic jams and congested roads, India ranks 3rd in terms of deaths due to road accidents. There is one death every four minutes due to a road accident in India. Moreover, 20 children under the age of 14 die everyday due to road accidents. Experts say, autonomous, IoT enabled cars have the potential to bring down the number of car accidents to a great extent. Human follies have no role to play when driving is done by intelligent machines. The day may not be that far considering – The Indian Internet of Things (IoT) market is set to grow to $15 billion by 2020 from the current $5.6 billion, according to a report by NASSCOM. However, many opine that, from a market perspective, it seems more challenging to have autonomous cars in India, than let’s say Europe or the US. Then there is the legal scenario as well as to which country will draw up a legal framework to make autonomous driving a reality. The potential for autonomous cars in India is surely huge with IT and analytics skills needed to fuel the developments in the direction. India has tons of that. India, a new entrant in connected car segment- Though a new entrant in the connected car segment, India has great potential as it needs connectivity on the go. This connectivity is required for basic aspects such as tracking of, vehicles and providing travelers with customized services. For it to take shape, a strong synergy between auto companies, telecom providers and cloud service providers is required. The scope of growth is huge as it will open up new channels of revenue for everyone involved in the connectivity value chain – be map providers. Web application developers, mobile operators, enterprise application specialists and VAS providers. Mobile technology will further catapult this growth curve. As per TRAI data, India has one billion mobile phones and mobile internet is fast surpassing broadband. For the connected car and autonomous vehicle market to evolve, there is a need of bundling all these offerings into the ecosystem for seamless functioning of bandwidth allocation, storage and content management. In the end, connected vehicles must be productive. Dr Roshy John, a robotics professional had already designed an autonomous vehicle that had been tested on Indian road. He virtually simulated a Tata Nano using algorithms that would suit Indian road conditions. He used laser scanners in place of expensive sensors. He included pedal assistance, 3D simulation and driver psychology. His autonomous model can differentiate between static and dynamic vehicles. This model is not yet commercialized though. Cyber Media Research, a research firm is of the view that it will take another generation to make autonomous vehicle transportation network viable for low automated regions such as India. As per studies, global revenues from β€œconnected cars” the forerunner to fully autonomous or self-driving cars β€” are growing at an annual rate of 27.5 per cent and are expected to touch $21 billion by 2020. 
For an autonomous vehicle to be effective, data is the most important factor. The timely collection, processing and sharing of data between and within vehicles is imperative for it to function smoothly. For autonomous vehicles to be successful, infrastructure, laws, regulations, traffic systems, emergency response systems, manufacturing systems, and data and information handling and processing systems need to undergo swift advancement. Though this is viable, completely removing the human touch is a tough thing to do in India. There is a large workforce of drivers and mechanics who would need to be placed in other jobs before we can practically look at such innovations. Safety is also an important concern for autonomous vehicles: cyber attacks and hacking can cause huge damage, so manufacturers should make it a point to come up with strong cyber security measures to safeguard vehicle owners from such attacks. It may take a decade for mass adoption of driverless vehicles to take place.
Let's talk cases first. Google was the first company to launch self-driving cars. Tesla Motors, General Motors and Ford soon followed suit. Uber, the on-demand car player last year announced a $300 million deal with Swedish car maker Volvo to develop fully driverless, or autonomous, cars by 2021. The company acquired Otto, a San Francisco […]
["IT Services"]
[]
AIM Media House
2017-09-01T06:42:25
2017
738
["Go", "AWS", "AI", "RPA", "ML", "innovation", "RAG", "ViT", "analytics", "R"]
["AI", "ML", "analytics", "RAG", "AWS", "R", "Go", "ViT", "RPA", "innovation"]
https://analyticsindiamag.com/it-services/rise-autonomous-cars-india/
4
10
4
true
false
false
10,172,794
Indian BFSI Reinvents Risk Detection with AI-Driven Early Warning Systems
Non-performing assets (NPAs) have long plagued India's banking and financial services sector. Traditional methods of credit risk management have often failed to detect early signs of borrower distress. As credit portfolios continue to grow in size and complexity, there is also a critical need to consider automation and predictive intelligence, as in many other sectors. AI-driven early warning systems (EWS) have started to transform risk management in the banking, financial services, and insurance (BFSI) sector by automating monitoring and enabling proactive action before defaults occur, benefiting the borrower. AI-Driven Financial Insights and Risk Management: In conversation with AIM, Jaya Vaidhyanathan, CEO of BCT Digital, an AI-based risk management company, said, "While traditional systems may flag individual anomalies, AI models excel at connecting the dots across seemingly unrelated data points to detect early signs of credit stress." Vaidhyanathan added that the data stream remains consistent when it comes to AI. However, an AI-driven EWS introduces a valuable layer of intelligence. Such a system not only processes large amounts of internal and external data but also identifies intricate patterns that are nearly impossible for humans to detect manually. Tarun Wig, co-founder and CEO of Innefu Labs, told AIM that post-COVID-19, customer financial behaviour has shifted dramatically, marked by digital-first interactions, multiple income streams and new spending patterns. "AI bridges this gap by ingesting real-time, high-frequency data instead of relying solely on static financial statements or past repayment records." This enables early warning systems to continually learn and update risk profiles, identifying emerging stress signals much sooner than traditional models could. He believes that AI systems can detect early distress signals by utilising market-specific indicators, such as currency fluctuations, geopolitical news, regulatory changes, and sector developments. For example, disruptions in supply chains or industry downturns can be identified through news feeds and social sentiment analysis. By integrating these unconventional data points with core financial information, AI provides a more comprehensive view of creditworthiness, especially in volatile markets. Vaidhyanathan believes that banks are increasingly adopting streaming architectures while acknowledging their complexities. BCT Digital's rt360 EWS is designed for flexibility, integrating both traditional methods, such as ETL, database links and flat files, and modern approaches, such as application programming interfaces (APIs) and streaming feeds. BCT Digital has developed a Real-Time Monitoring System (RTMS) to enhance low-latency alerting. This system enables near real-time data ingestion through APIs, bots, and streaming pipelines, which is essential for timely alerts, she added. The RTMS includes an expandable alert library for all bank portfolios with customisable thresholds. Moreover, it uses in-memory processing to detect suspicious transactions within milliseconds, facilitating immediate action and low-latency alerts. Encora, a digital product and software engineering provider, believes AI and machine learning are significantly reshaping traditional credit risk models, especially as consumer behaviours shift following COVID-19. Encora also partners with BFSI clients to develop scalable, AI-driven EWS and real-time data pipelines using cloud-native architectures.
"By leveraging industry-specific AI accelerators, we deliver explainable and regulatory-compliant models that are ready for production, ensuring effective and proactive decision-making," Chinmay Mhaskar, executive vice president at Encora, told AIM. Speaking about a real-time instance of EWS deployment at a prominent public sector bank, Vaidhyanathan said that BCT Digital implemented additional scenarios specifically designed to detect mule accounts, given the rising threat of fraudulent financial activity. Within just three months of rollout, the system successfully identified over 8,000 mule accounts using real-time data patterns and behaviour analysis. These accounts were immediately flagged and frozen in real time, helping the bank prevent potential financial losses and regulatory breaches. Encora uses AI to enhance customer insights and manage strategic risk effectively. Its solutions predict default and renewal risk using machine learning for behavioural modelling and by scoring churn based on policy and payment patterns. Mhaskar highlighted that the company uses natural language processing (NLP) to analyse digital interactions and understand customer behaviour. By integrating credit risk and portfolio management, Encora turns default risks into measurable credit exposure. Its offerings include pre-trained AI models, real-time MLOps pipelines for risk scoring, and a unified view of risk by merging CX/UX data with policy histories, supported by API interoperability between credit and insurance systems. Unstructured Data Struggles: The rise of digital banking and neo-banks presents new opportunities, alongside challenges related to data velocity and complexity that traditional systems struggle to manage. Mhaskar argues that AI-powered EWS must address issues such as unstructured data, fragmented ecosystems, and the need for real-time analytics, while also grappling with limited historical data and persistent concerns about data quality. Encora mitigates these challenges by developing AI-ready data mesh frameworks tailored to the fintech ecosystem and ensuring reliability through end-to-end MLOps orchestration. "We co-develop real-time, AI-ready data mesh frameworks tailored to the fintech ecosystem. Our NLP and behavioural models extract insights from digital signals, such as frustration events or session drop-offs, including pre-built API connectors, thin-file credit scoring templates, and customisable EWS dashboards," Mhaskar highlighted. Similarly, the rt360-EWS is built to ingest structured, semi-structured, and unstructured data, converting them into a unified format for streamlined processing. Vaidhyanathan said financial institutions function within complex and non-standard IT ecosystems, which exhibit varying levels of data maturity. Therefore, they have implemented a diversified data ingestion strategy tailored to each specific data type and use case. Tackling Other Challenges: Wig added that ensuring fairness begins with the curation of diverse data across different geographies and demographics. Regular bias audits and fairness-aware algorithms help identify and reduce discrimination. Moreover, transparent governance and human reviews are essential to prevent automated decisions from disproportionately impacting any community and to maintain ethical AI practices. Vaidhyanathan believes that transparency is crucial in regulated environments.
BCT Digital's EWS ensures that stakeholders understand the decision-making process by providing clear explanations for each alert and maintaining a detailed audit trail. "This transparency allows credit officers to understand not just that an alert was raised, but why it was raised, building confidence in the system's output and enabling better decision-making." "AI-powered EWS offer transformative potential for risk management, but for traditional financial institutions, adoption comes with real-world complexities. Financial institutions aren't lacking intent; they're grappling with deeply entrenched barriers across people, process, and platform," Mhaskar concluded.
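To make the alert-library idea above more concrete, here is a minimal, illustrative sketch of a rule-plus-score early warning check in Python. The field names, weights and the 0.6 threshold are hypothetical and not taken from rt360 or any vendor system; a production EWS of the kind described in the article would combine far more signals (bureau data, news sentiment, transaction graphs) in real time.

# Minimal sketch of a threshold-based early warning score.
# All field names, weights and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    account_id: str
    days_past_due: int          # current overdue days
    utilisation: float          # share of sanctioned limit drawn (0-1)
    inward_cheque_bounces: int  # bounces in the last 90 days
    balance_drop_pct: float     # fall in average balance vs previous quarter

def early_warning_score(s: AccountSnapshot) -> float:
    """Combine a few stress indicators into a single 0-1 score."""
    score = 0.0
    score += 0.4 * min(s.days_past_due / 90.0, 1.0)
    score += 0.2 * max(s.utilisation - 0.8, 0.0) / 0.2
    score += 0.2 * min(s.inward_cheque_bounces / 3.0, 1.0)
    score += 0.2 * min(max(s.balance_drop_pct, 0.0) / 0.5, 1.0)
    return min(score, 1.0)

def check_account(s: AccountSnapshot, threshold: float = 0.6) -> None:
    score = early_warning_score(s)
    if score >= threshold:
        print(f"ALERT {s.account_id}: early-warning score {score:.2f}")

if __name__ == "__main__":
    check_account(AccountSnapshot("ACC-001", days_past_due=45,
                                  utilisation=0.95, inward_cheque_bounces=2,
                                  balance_drop_pct=0.4))

In practice such hand-tuned weights would be replaced or augmented by learned models, but the shape of the pipeline, ingest a snapshot, score it, compare against a customisable threshold, raise an alert, mirrors the workflow the vendors describe.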
"AI models excel at connecting the dots across seemingly unrelated data points to detect early signs of credit stress."
["AI Features"]
["BFSI", "early warning systems"]
Smruthi Nadig
2025-07-03T12:37:43
2025
1,017
["machine learning", "TPU", "AI", "BFSI", "sentiment analysis", "ML", "MLOps", "early warning systems", "RAG", "NLP", "Aim", "analytics"]
["AI", "machine learning", "ML", "NLP", "analytics", "MLOps", "Aim", "RAG", "sentiment analysis", "TPU"]
https://analyticsindiamag.com/ai-features/indian-bfsi-reinvents-risk-detection-with-ai-driven-early-warning-systems/
3
10
3
true
true
false
10,040,142
Delhi Traffic? No Problem. Now, AI Will Show You The Way
Even with improvements in public transportation and the growing concern over the environmental impact of automobiles, cities have not seen a substantial decrease in congestion, if at all. Traffic management remains a vital concern in city planning, especially in Asia, which accounted for 6 of the top 10 cities with the worst traffic in 2020. We are constantly looking for better ways to handle congestion on the road. Now, technology has come to our rescue, with authorities increasingly making use of modern technologies like artificial intelligence, AR, and even blockchain to solve modern-day traffic problems. Smarter systems: In 2018, drivers in Delhi spent around 58 percent more time stuck in traffic than drivers in any other city in the world. Finding a solution to Delhi's growing congestion problem led the Ministry of Home Affairs to permit the Delhi Police to employ a new intelligent traffic management system (ITMS). Such systems apply artificial intelligence (AI), machine learning (ML) and data analysis tools to existing traffic infrastructure. Delhi's proposed project uses over 7,500 CCTV cameras, automated traffic lights, and 1,000 LED signs carrying sensors and cameras installed across the city. The Delhi Police will then use AI to process these feeds into real-time insights, collect them in the cloud and make real-time decisions on balancing traffic flow, identifying vehicle number plates, and spotting traffic trends. Such systems can help cities plan a more effective way to curb heavy congestion. Moreover, researchers at Nanyang Technological University (NTU) in Singapore developed an AI-powered intelligent routing algorithm that minimises congestion by simultaneously directing the routes of multiple vehicles. An algorithm of this kind would suggest alternative routes to users in a way that keeps traffic low. Of course, such systems could also be tricky to implement, since they would have to be taught to prioritise emergency vehicles over private ones and display specific routes to cyclists and buses. However, with the advancements in AI and machine learning, an algorithm that can differentiate between types of vehicles and tell vehicles and pedestrians apart does not seem far-fetched. Finally, the implementation of ITMS would also have a positive environmental impact. In the United States, the Surtrac intelligent traffic signal control system was put up at 50 intersections (as of 2016) across Pittsburgh. The system cut travel times by 26 percent, wait times at intersections by 41 percent and vehicle emissions by 21 percent. Thus, more efficient traffic management helps reduce harmful emissions, making AI a champion of both traffic control and greener solutions. A vision for the future: Another innovation that has helped promote safe driving practices and manage traffic involves adopting augmented reality (AR) into existing systems. Smart car windshields could display essential information, with the help of technologies like AI and IoT systems, such as speed, ETA, possible obstacles ahead, and distance and congestion on nearby roads in real time. Using augmented reality in vehicles could help flag unsafe driving practices like jumping a red light or going over the speed limit. This could make driving safer and help manage congestion. That said, the market for AR in cars is new and unclear, with some experts estimating it will reach $14 billion by 2027 and others expecting a value of $673 billion by 2025.
Another potential use of technology in traffic management is blockchain-based contracts, which could allow improved transaction mechanisms for tolls or even petrol pumps. These could enable secure automated payments and reduce payment time and thus commute time, thereby decreasing traffic. A multitude of new technologies is being used to solve problems we have not been able to solve traditionally. However, such technologies involve a great deal of risk, with many people fearing surveillance and privacy issues from the increased use of facial recognition technology and CCTV footage, or even the possibility of AR and IoT being used to transfer data to local authorities without explicit consent. All are valid concerns. However, with congestion projected to cost nearly $300 billion by 2030, something needs to be done to improve our existing traffic management systems.
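As a rough illustration of the congestion-aware routing idea mentioned above, the sketch below runs Dijkstra's algorithm over a toy road graph whose edge costs are free-flow travel times scaled by live congestion multipliers. The graph, multipliers and cost model are invented for the example; a real ITMS would feed camera and sensor data into far richer models and coordinate many vehicles at once.

# Minimal sketch of congestion-aware route selection (toy data, assumptions only).
import heapq

# edge: (neighbour, free-flow minutes)
ROADS = {
    "A": [("B", 5), ("C", 8)],
    "B": [("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
# live congestion multiplier per (from, to) edge, notionally updated from sensors
CONGESTION = {("A", "B"): 2.5, ("B", "D"): 1.2, ("A", "C"): 1.0, ("C", "D"): 1.1}

def fastest_route(start: str, goal: str):
    """Dijkstra over congestion-scaled travel times."""
    queue = [(0.0, start, [start])]
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, minutes in ROADS[node]:
            edge_cost = minutes * CONGESTION.get((node, nxt), 1.0)
            heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    eta, path = fastest_route("A", "D")
    print(f"Suggested route {' -> '.join(path)} (~{eta:.1f} min)")

With the heavy congestion on the A-B link, the sketch steers the vehicle onto the A-C-D route even though it is longer in free-flow terms, which is exactly the kind of rerouting behaviour the NTU-style algorithm aims to coordinate across many vehicles simultaneously.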
Even with the improvement of public transportation and the growing concern over the environmental impact automobiles pose, cities have not seen a substantial decrease in congestion, if at all. Traffic management remains a vital concern in city planning, especially in Asia, which accounted for 6 of the top 10 cities with the worst traffic in 2020. We […]
["AI Features"]
[]
Mita Chaturvedi
2021-05-16T16:00:00
2021
669
["Go", "machine learning", "artificial intelligence", "programming_languages:R", "AI", "innovation", "ML", "programming_languages:Go", "ViT", "R"]
["AI", "artificial intelligence", "machine learning", "ML", "R", "Go", "ViT", "innovation", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/delhi-traffic-no-problem-now-ai-will-show-you-the-way/
4
10
1
false
false
true
49,102
Google Has A Chief Decision Scientist, Who Is She
When it comes to data science, there is a lot of confusion around job definitions. There are still many companies that don't even have a clear idea of what kind of data science professionals they need to solve their business problems. While many aspirants and employers are still pondering over designations like data scientist and data engineer, there is one more job title that was even more obscure but has now gained tremendous traction: the Decision Scientist. What Is Decision Science? Even though decision science is extensively similar to data science and involves things like analytics, algorithms, machine learning and AI, the job of a decision scientist is more crucial. A data scientist is solely involved in extracting meaningful insights, but a decision scientist does more than that. A decision scientist possesses not only technical and mathematical knowledge but also strong business knowledge, the ability to effectively communicate with different stakeholders, and the ability to frame and simplify poorly defined business problems. Simply put, this tribe of professionals is responsible for making some of the most crucial decisions throughout an organisation by turning data into context-specific, objective insights. Google's Chief Decision Scientist: In 2018, Google appointed Cassie Kozyrkov as the organisation's Chief Decision Scientist. Kozyrkov has served in various technical roles at Google over the years, and this illustrious personality now leads the search giant's decision making. Kozyrkov (whose designation is definitely a mission statement) believes that artificial intelligence and big data analytics make human tasks easier, and at Google she makes the best of these technologies using data and behavioural science along with human decision-making. The Google chief decision scientist has guided more than 100 projects over the years and designed Google's analytics program. That is not all; she has personally trained over 20,000 Googlers in statistics, decision-making, and machine learning. As the head of decision science, Kozyrkov's prime mission is to democratise decision intelligence and safe, reliable AI. According to Kozyrkov, AI is just an extension of what humans have been striving for for ages, and there is nothing to worry about in the fear that AI will take all the human jobs. "If you can do it better without a tool, why use the tool? And if you're worried about computers being cognitively better than you, let me remind you that your pen and paper are better than you at remembering things. A bucket is better than any human at holding water, a calculator is better than a human at huge numbers," said Kozyrkov at AI Summit 2019, London. Demand In India: It is not just Google in the decision science arena; companies like Microsoft and Walmart are also on the list of those that believe decision science is going to change the way business problems are solved. Bengaluru-based data analytics services company Mu Sigma is also a great example of how decision science is going to be a sought-after domain. Back in 2015, the company took a vital step and raised the compensation for entry-level decision scientists. According to a report, the company's new compensation structure would include a one-time salary advance of ₹5 lakh for all entry-level decision scientists who successfully complete the training offered by Mu Sigma University. With time, every domain is evolving, and data science is no exception.
To stay on top of the game, every data science professional will have to upskill in line with these transformations. And as decision science starts to come into the picture, companies will soon realise the need for this breed of professionals.
When it comes to data science, there is a lot of confusion considering the job definitions. There are still many companies who don't even have a clear idea of what kind of data science professionals they need to solve their business problems. While many aspirants and employers are still pondering over designations like data scientist […]
["Global Tech"]
[]
Harshajit Sarmah
2019-10-30T18:00:18
2019
589
["big data", "data science", "Go", "machine learning", "artificial intelligence", "AI", "ViT", "analytics", "GAN", "R"]
["AI", "artificial intelligence", "machine learning", "data science", "analytics", "R", "Go", "big data", "GAN", "ViT"]
https://analyticsindiamag.com/global-tech/data-science-google-decision-scientist/
3
10
1
false
true
false
10,141,371
Perplexity's Shopping Assistant is a Killer
California-based conversational search engine Perplexity is taking a step ahead with its new upgrade, introducing shopping features that go beyond traditional search capabilities. CEO Aravind Srinivas believes Perplexity can be India's AI app. His recent tweet suggested market potential for the search engine in India, which, if realised, would open up interesting vernacular use cases. While Perplexity doesn't possess its own foundational LLM, the company asserts that it provides substantial value. It currently manages over 100 million queries weekly, with the goal of scaling to 100 million queries daily. Srinivas has been bullish about expanding the company's presence. In India, many founders and developers are already building on existing LLMs, creating value at the application layer. ChatGPT Moment for Shopping? This comes after Perplexity launched a new shopping feature for its Pro users in the US, where users can research and purchase products. The AI commerce experience, 'Buy with Pro', lets users pick products from select merchants on the website or app, check out, and place their order. Additionally, a visual search tool called 'Snap to Shop' displays relevant products, requiring shoppers only to take a photo of the product they wish to purchase. Srinivas took to X to share how the platform is evolving from a research tool to one that is revolutionising commerce. "I don't quite think it's the ChatGPT moment for shopping. But I think the future is looking bright for customers to find and buy what they want much faster without ads and spam. Some more work needs to be done to feel true magic. We're on it," he said, highlighting his focus on removing ads and spam for a cleaner, user-friendly shopping experience. With this new feature, Perplexity competes with Google's Lens feature, Amazon's Rufus assistant, and Walmart's GenAI recommendation tool, but reportedly stands out by offering direct purchasing through its search engine. This strategy highlights the increasing adoption of generative AI to improve product discovery and streamline e-commerce transactions, and Perplexity is determined to stay ahead in this space. What's Perplexity's Moat? The new shopping feature comes amid intensifying competition in the search market, after OpenAI's recent announcement of its 'SearchGPT' integration into ChatGPT. The company has been steadily expanding its capabilities by adding new features like Perplexity Spaces, a finance analysis tool, an internal file search engine, and an advanced reasoning mode. Perplexity has introduced innovative ways to streamline information during key events. During NVIDIA's earnings call yesterday, 'Perplexity Finance' offered live transcripts and key highlights, which will soon expand to major stocks. A similar real-time experiment was done for the US elections, where Perplexity partnered with The Associated Press and Democracy Works to create an election information hub that included live updates, real-time vote counts, and personalised ballot details. Srinivas expressed disappointment over traditional news websites lacking proper coverage. Perplexity is often touted as a GPT wrapper. Commenting on the value that wrappers add, Srinivas further said, "Wrappers are at all levels; it's just that they have given you so much value that you do not care." Amazon chief Jeff Bezos has invested in Perplexity AI, showing his confidence in its potential to innovate AI search.
Even NVIDIA CEO Jensen Huang praised the tool, revealing he uses it "almost every day" for its practical benefits. Notably, Meta's chief AI scientist, Yann LeCun, who strongly advocates ethical and moral AI practices, was involved in the company's early funding rounds. Even at an event at Carnegie Mellon University, the host used Perplexity AI to create questions for Google CEO Sundar Pichai. On competing with players like Google today, Srinivas said, "Every single query on Perplexity, on average, has 10-11 words. Every query on Google has about two to three words, so users have much higher intent with each query, allowing them to ask more targeted questions." Expansion Mode On: Reports suggest that Perplexity is also preparing to raise funds at a valuation of $9 billion, which would be its fourth round this year. This funding will help Perplexity expand into newer markets and fight its legal battles. In August, Perplexity signed a revenue-sharing deal with publishers like TIME, Der Spiegel, and Fortune after plagiarism allegations. It soon faced lawsuits from News Corp, which owns The Wall Street Journal and New York Post, for copyright violations, and from The New York Times for AI scraping, intensifying financial pressures amid growing plagiarism disputes. Srinivas criticised News Corp's lawsuit, calling it a counterproductive and unnecessary conflict between media and tech, urging collaboration to create innovative tools and expand business opportunities.
Gone are the days of traditional shopping as AI takes over.
["AI Features"]
["AI (Artificial Intelligence)"]
Aditi Suresh
2024-11-21T17:03:00
2024
755
["Go", "ChatGPT", "GenAI", "OpenAI", "AI", "AWS", "ML", "RAG", "generative AI", "R", "AI (Artificial Intelligence)"]
["AI", "ML", "generative AI", "GenAI", "ChatGPT", "OpenAI", "RAG", "AWS", "R", "Go"]
https://analyticsindiamag.com/ai-features/perplexitys-shopping-assistant-is-a-killer/
2
10
2
false
false
true
14,830
Big Data and Analytics is now being used in greening the planet
Microsoft is undertaking several projects dedicated to sustainability. Microsoft has been making significant contributions in Tech for Good and has taken significant steps towards environmental conservation. The company's going-green mantra is underscored by $1.1 million in 2016 fundraising and the 5,949 volunteering hours put in by its employees. But it doesn't stop there. Microsoft's ecosystem allows the firm, its employees, and its business partners to leverage new technologies to improve the sustainability of their companies and communities. The Redmond giant recently tied up with The Nature Conservancy, a nonprofit, to extend support for nonprofits globally. At Microsoft, big data is greening the planet: Microsoft's commitment towards nature is deeply rooted in the technologies it utilizes. Microsoft announced a $1 billion commitment to bring cloud computing resources to nonprofit organizations around the world. The firm donates nearly $2 million every day in products and services to nonprofits as part of that commitment. Microsoft has extended its support to organizations like the World Wildlife Fund, Rocky Mountain Institute, Carbon Disclosure Project, Wildlife Conservation Society, and the U.N. Framework Convention on Climate Change's (UNFCCC) Climate Neutral Now initiative. Here are a slew of use cases. How is Prashant Gupta's initiative helping farmers in Andhra Pradesh increase revenue? Prashant Gupta works as a Cloud + Enterprise Principal Director at Microsoft and is undertaking significant work for the environment. Earlier, Gupta had facilitated a partnership between Microsoft, a United Nations agency, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), and the Andhra Pradesh government. The project involved helping groundnut farmers cope with drought. Gupta and his team leveraged advanced analytics and machine learning to launch a pilot program with a Personalized Village Advisory Dashboard for 4,000 farmers in 106 villages in Andhra Pradesh. It also included a Sowing App with 175 farmers in one district. Based on weather conditions, soil, and other indicators, the Sowing App advises farmers on the best time to sow. The Personalized Village Advisory Dashboard provides insights about soil health, fertilizer recommendations, and seven-day weather forecasts. Nature Conservancy's Coastal Resilience program: Microsoft's Azure cloud platform powers The Nature Conservancy's Coastal Resilience program, a public-private partnership led by the Conservancy to help coastal communities address the devastating effects of climate change and natural disasters. The program has trained and helped over 100 communities globally in the uses and applications of Microsoft's Natural Solutions Toolkit. The toolkit contains a suite of geospatial tools and web apps for climate adaptation and resilience planning across land and sea environments. This has helped in strategizing for risk reduction, restoration, and resilience to safeguard local habitats, communities, and economies. Puget Sound: Puget Sound's lowland river valleys are a treasure house, delivering valuable assets and a wealth of natural, agricultural, industrial, recreational, and health benefits to the four million people who live in the region. However, these communities are at increasing risk of flooding from rising sea levels, more extreme coastal storms, and more frequent river flooding.
The Conservancy's Washington chapter is building a mapping tool as part of the Coastal Resilience toolkit to reduce the flow of polluted stormwater into Puget Sound. Emily Howe, an aquatic ecologist, is in charge of the project, which revolves around developing the new Stormwater Infrastructure mapping tool. This tool will eventually be integrated into the Puget Sound Coastal Resilience tool set, which will be hosted on Azure. Furthermore, it will include a high-level heat map of stormwater pollution for the region, combining an overlay of pollution data with human and ecological data for prioritizing areas of concern. Data helps in watershed management: Today, around 1.7 billion people living in the world's largest cities depend on water flowing from watersheds. However, estimates suggest that those watershed sources will be tapped by up to two-thirds of the global population by 2050. Kari Vigerstol, The Nature Conservancy's Global Water Funds Director of Conservation, oversaw the development of a tool to provide cities with better data. The project entailed assisting cities in protecting their local water sources. 4,000 cities were analyzed by "Beyond the Source", and the results stated that natural solutions can improve water quality for four out of five cities. Furthermore, the Natural Solutions Toolkit is being leveraged globally to better understand and protect water resources around the world. Through the water security toolkit, cities will be furnished with a more powerful set of tools. Users can also explore data and access proven solutions and funding models using the beta version of the Protecting Water Atlas. This tool will help in improving water quality and supply for the future. Microsoft is illuminating these places with its innovative array of big data and analytics offerings. In Finland, Microsoft partnered with CGI to develop a smarter transit system for the city of Helsinki. This data-driven initiative saw Microsoft utilize the city's existing warehouse systems to create a cloud-based solution that could collate and analyse travel data. Helsinki's bus team noticed a significant reduction in fuel costs and consumption, besides realizing increased travel safety and improved driver performance. Microsoft Research Lab Asia designed a mapping tool, called Urban Air, for the markets in China. The tool allows users to see, and even predict, air quality levels across 72 cities in China. It furnishes real-time, detailed air quality information, making use of big data and machine learning. Additionally, the tool is paired with a mobile app, which is used about three million times per day. Microsoft is implementing environmental strategies worldwide. The firm is assisting the city of Chicago in designing new ways to gather data, and is also helping the city utilize predictive analytics in order to better address water, infrastructure, energy, and transportation challenges. Boston serves as another great instance, where Microsoft is working to spread information about the variety of urban farming programs in the city and is counting on the potential of AI and other technology to increase the impact for the city. Microsoft has also partnered with Athena Intelligence for developing the hill city of San Francisco. As part of this partnership, Microsoft is leveraging Athena's data processing and visualization platform to gather valuable data about land, food, water, and energy.
This will help in improving local decision-making. Outlook: Data is not all that matters. In the end, it is essentially about how cities can be empowered to take action based on that data. Microsoft has comprehensively supported the expansion of The Nature Conservancy's innovative Natural Solutions Toolkit. The solution suite is already powering on-the-ground and in-the-water projects around the world, besides benefiting coastal communities, residents of the Puget Sound, and others globally. Microsoft is doing an excellent job of delivering on its promise to empower people and organizations globally to thrive in a resource-constrained world. The organization is empowering researchers, scientists and policy specialists at nonprofits by providing them with technology that addresses sustainability.
Microsoft has been making significant contributions in Tech for Good and has taken significant steps towards environment conservation. The company's going green mantra is underscored by the $1.1 million in 2016, fundraising and 5,949 number of volunteering hours put in by its employees. But it doesn't stop there. Microsoft's ecosystem allows the firm, its employees, […]
["IT Services"]
["Azure cloud platform"]
Amit Paul Chowdhury
2017-05-09T12:26:05
2017
1,155
["Go", "machine learning", "AI", "cloud computing", "Azure", "R", "Azure cloud platform", "RAG", "Ray", "analytics", "predictive analytics"]
["AI", "machine learning", "analytics", "Ray", "RAG", "predictive analytics", "cloud computing", "Azure", "R", "Go"]
https://analyticsindiamag.com/it-services/big-data-analytics-now-used-greening-planet/
3
10
3
false
false
true
10,005,992
Visualizations With SandDance Using Visual Studio Code
In the past we have seen many visualization tools like PowerBI, Tableau, Salesforce, Splunk, etc., and lots of libraries like matplotlib, plotly, ggplot, bamboolib, etc., but how many of us have seen a code editor helping us with visualizations without having to code? Interesting, right? This is possible using the SandDance extension in Visual Studio Code. SandDance is an extension that helps us visualize our data and drill down by filtering, and it can also generate 3D graphs with a single click. Let us see how we can get started with SandDance in Visual Studio Code. This article will cover: requirements; data transformations; explanation of the dataset; loading the dataset and viewing it with SandDance; visualizations and insights; and a conclusion. Requirements: Visual Studio Code; the Python extension for Visual Studio Code; the SandDance extension for Visual Studio Code; the Titanic dataset. Data transformations: Survived (0 = No, 1 = Yes); PClass (1 = 1st class, 2 = 2nd class, 3 = 3rd class). Explanation of the dataset: PassengerId – unique id for every passenger; Survived – did the passenger survive the accident or not (Yes/No); PClass – the passenger class (1st, 2nd or 3rd class); Name – name of the passenger; Sex – gender of the passenger; Age – age; SibSp – number of siblings/spouses aboard; Parch – number of parents/children aboard; Ticket – ticket number; Cabin – cabin number; Embarked – point of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton). Loading the dataset and viewing it with SandDance: File -> Open file… -> navigate to and select the Titanic dataset. Once the dataset is loaded, right-click the dataset file and look for "View in SandDance". Visualizations and insights: When you view the dataset using SandDance, this is how it will look. Before we start with any visualizations, let's see what all the icons on the page mean. From figure 1 we can understand that there were more men on the ship than women, but the thing to notice is that the survival ratio of females was higher than that of men. Let's dig deeper and see what else can be understood. Figure 1: Column chart for Sex. By isolating the female column and dividing it further based on passenger class (PClass), from figure 2 we can see that about 50% of the women travelling in 3rd class died, whereas most of the women travelling in 1st and 2nd class survived. Figure 2: Column chart for females based on PClass. Now let's add one more layer of detail with the select tab, highlighting females below the age of 18. Figure 3 shows that the maximum number of females below the age of 18 were travelling in 3rd class, and their ratio of death to survival is similar. Figure 3: Overview of females below 18. Figure 4 shows us an overview of the passengers who boarded from Cherbourg, Queenstown and Southampton, which is the result of faceting the column chart of sex by embarked. From figure 4 we can see that most of the passengers boarded the ship from Southampton, and looking closely we can identify that around 75% of the men who boarded the Titanic from Southampton died. On the other hand, almost all of the men who boarded the ship from Queenstown died in the accident. Figure 4: Faceting the column chart of sex based on embarked. Figure 5 gives us information on the passenger class of the people who survived. Let's take a closer look at each graph separately. The colors tell us that in 1st class most of the people embarked from Cherbourg and Southampton; on the other hand, 2nd class is crowded with people from Southampton.
The third class looks like a mix of people from all three locations. Figure 5: Column chart of sex faceted by PClass. Observing figure 6, we can find some anomalies related to the fare that people paid to get into different classes. Have a look at the region encircled in the figure: the passenger paid a very low fare yet got into first class. If you click on that cell and look up his name on the internet, you'll find that he wasn't satisfied with his ticket, hence the crew upgraded him. Figure 6: Tree map of PClass based on fare. Figure 7 shows a 3D graph of people in first class with the Z-axis as the fare paid. It's interesting to note that, on average, people who embarked from Cherbourg paid more for a first-class ticket in comparison to the others. Figure 7: 3D graph of the fare paid by 1st class passengers. Conclusion: EDA is a very crucial part of the data science pipeline, and one should always use tools that provide a lot of functionality with less stress on coding. Better and quicker visualizations lead to efficient decision making. One of the major benefits of using SandDance is how easy it is to drill down to a focused view of every graph, along with the ability to isolate parts of the graphs for further analysis.
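SandDance itself needs no code, but the recoding described in the data transformations section above can be done in a few lines of pandas before opening the file in Visual Studio Code. The sketch below assumes a local "titanic.csv" with the standard Kaggle column names (PassengerId, Survived, Pclass, and so on); adjust the names if your copy differs.

# Minimal sketch: recode numeric flags into readable labels for SandDance.
# File name and column names are assumptions based on the common Kaggle Titanic CSV.
import pandas as pd

df = pd.read_csv("titanic.csv")

# Map the coded columns to the labels used in the walkthrough above.
df["Survived"] = df["Survived"].map({0: "No", 1: "Yes"})
df["Pclass"] = df["Pclass"].map({1: "1st class", 2: "2nd class", 3: "3rd class"})

# Save a friendlier copy, then right-click it in VS Code and choose "View in SandDance".
df.to_csv("titanic_sanddance.csv", index=False)
print(df[["Survived", "Pclass", "Sex", "Age", "Embarked"]].head())

Recoding the flags up front means the column charts in SandDance show "Yes"/"No" and "1st class"/"2nd class"/"3rd class" directly, instead of bare 0/1 and 1/2/3 codes.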
In the past we have seen many visualization tools like PowerBI, Tableau, Salesforce, Splunk, etc. and lots of libraries like matplotlib, plotly, ggplot, bamboolib, etc., but how many of us have seen a code editor helping us with visualizations without having to code? Interesting right? This is possible by using the SandDance extension in Visual […]
["Deep Tech"]
["apm data science"]
Rithwik Chhugani
2020-09-03T18:00:31
2020
832
["data science", "Go", "Plotly", "AI", "ETL", "RAG", "Python", "apm data science", "programming_languages:Python", "Matplotlib", "R"]
["AI", "data science", "Matplotlib", "Plotly", "RAG", "Python", "R", "Go", "ETL", "programming_languages:Python"]
https://analyticsindiamag.com/deep-tech/visualizations-with-sanddance-using-visual-studio-code/
3
10
0
true
false
false
6,542
Machine Learning For Better And More Efficient Solar Power Plants
Machine learning techniques support better solar power plant forecasting. Machine learning techniques play a crucial role in deciding where to build a plant when accurate location data is limited or unavailable. Machine learning techniques help maintain smart grid stability. The global solar photovoltaic (PV) installed capacity in 2013 was 138.9 GW, and it is expected to grow to over 455 GW by 2020. However, solar power plants still have a number of limitations that prevent them from being used on a larger scale. One limitation is that power generation cannot be fully controlled or planned for in advance, since the energy output from solar power plants is variable and prone to fluctuations dependent on the intensity of solar radiation, cloud cover and other factors. Another important limitation is that solar energy is only available during the day, and batteries are still not an economically viable storage option, making careful management of energy generation necessary. Additionally, as the installed capacity of solar power plants grows and plants are increasingly installed at remote locations where location data is not readily available, it is becoming necessary to determine their optimal sizes, locations and configurations using other methods. Machine learning techniques provide solutions that have been more successful in addressing these challenges than manually developed specialized models. Accurate forecasts of solar power production are a necessary factor in making this renewable energy technology a cost-effective and viable energy source. Machine learning techniques can forecast solar power plant generation at a better rate than current specialized solar forecasting methods. In a study conducted by Sharma et al., multiple regression techniques, including least-squares support vector machines (SVM) using multiple kernel functions, were compared with other models to develop a site-specific prediction model for solar power generation based on weather parameters. Experimental results showed that the SVM model outperformed the others with up to 27 percent more accuracy. Furthermore, machine learning techniques play a crucial role in assisting decision making regarding plant location and orientation selection, as solar panels need to be oriented according to solar irradiation to absorb the optimal energy. Conventional methods for sizing PV plants have generally been used for locations where the required weather data (irradiation, temperature, etc.) and other information concerning the site are readily available. However, these methods cannot be used for sizing PV systems in remote areas where the required data are not available, and thus machine learning techniques need to be employed for estimation purposes. In a study conducted by Mellit et al., an artificial neural network (ANN) model was developed for estimating sizing parameters of stand-alone PV systems. In this model, the inputs are the latitude and longitude of the site, while the outputs are two hybrid-sizing parameters. In the proposed model, the relative error with respect to actual data does not exceed 6 percent, thus providing accurate predictions. The model has been evaluated on 16 different sites, and experimental results indicated that the prediction error ranges from 3.75 to 5.95 percent with respect to the sizing parameters.
Additionally, metaheuristic search algorithms address plant location optimization problems by providing improved local searches under the assumption of a geometric pattern for the field. Lastly, to maintain grid stability, it is necessary to forecast both short-term and medium-term demand for a power grid in which renewable energy sources contribute a considerable proportion of the energy supply. The MIRABEL system offers forecasting models that target flexibilities in energy supply and demand, helping manage production and consumption in the smart grid. The forecasting model combines widely adopted algorithms like SVM and ensemble learners. It can also efficiently process new energy measurements to detect changes in upcoming energy production or consumption, and it employs different models for different time scales in order to better manage demand and supply depending on the time domain. Ultimately, machine learning techniques support better operations and management of solar power plants.
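As a rough illustration of the SVM-based forecasting idea discussed above, the sketch below trains scikit-learn's support vector regression on synthetic weather features. The data, feature set and hyperparameters are invented for the example and are not the Sharma et al. model; a real site-specific model would be fitted on historical weather and generation records for that plant.

# Minimal sketch: SVR on synthetic weather features (illustrative assumptions only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 500
irradiance = rng.uniform(0, 1000, n)      # W/m^2
temperature = rng.uniform(10, 40, n)      # deg C
cloud_cover = rng.uniform(0, 1, n)        # fraction of sky covered
X = np.column_stack([irradiance, temperature, cloud_cover])

# Toy generation curve: mostly irradiance-driven, dampened by cloud cover, plus noise.
y = 0.18 * irradiance * (1 - 0.6 * cloud_cover) - 0.3 * (temperature - 25) \
    + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")

The pipeline scales the weather inputs before the RBF-kernel SVR, which is the usual practice for kernel methods; swapping in other kernels or comparing against linear regression mirrors the kind of model comparison the cited study performed.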
Machine learning techniques support better solar power plant forecasting. Machine learning techniques play a crucial role in deciding where to build a plant when accurate or limited location data is available. Machine learning techniques help maintain smart grid stability. The global solar photovoltaic (PV) installed capacity in 2013 was 138.9 GW and it is expected […]
["IT Services"]
[]
AIM Media House
2014-12-11T17:16:27
2014
656
["Go", "TPU", "machine learning", "programming_languages:R", "AI", "neural network", "programming_languages:Go", "Git", "RAG", "R"]
["AI", "machine learning", "neural network", "RAG", "TPU", "R", "Go", "Git", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/it-services/machine-learning-better-efficient-solar-power-plants/
4
10
1
true
false
true
53,705
How Legal Sports Betting Industry Can Win The Gamble With Artificial Intelligence
Those days are almost gone when one placed a bet on their favourite team while hiding from the authorities – sports betting is now legal in many parts of the world. Whether one bets out of love for a team or a player, or purely out of dislike for the opposing team, one needs to have some knowledge – in technical terms, some data. The sports betting industry is turning to artificial intelligence with this data. Legal sports betting around the world involves working with a lot of information and data collection. Various sports leagues around the globe give bookmakers this data to come up with better products that enhance legal betting. And to improve the field of legal sports betting with massive data, no technology could work better than artificial intelligence. Artificial intelligence and machine learning are helping predict patterns in a sport, but for AI to work significantly well, the sport has to be predictive and follow a particular set of rules. For instance, football, which has a specific set of rules, short durations and repeatable situations, suits an AI model: over a lakh videos of games were put through an algorithm to surface patterns that AI can predict. The real effect of the technology is felt when it provides these insights in real time, which can bear on the significant factors when it comes to betting. What Kind Of Data Is Required? Anybody working with AI and ML knows what an algorithm is – a mathematical formula that organises and evaluates data to solve a complex problem. In legal sports betting, the AI makes use of player statistics and team information to predict the possible outcome. For example, in a sport like the NBA, stats like field goals; 3-pointers; free throws; the number of rebounds, assists, steals, blocks and turnovers; and game scores from past seasons are used as the data for these algorithms. With advanced analytical tools, AI can revolutionise the way one sees betting. The study of sports betting algorithms and AI is still in its early stages, but companies like Stratagem, Winnerodds and StatsPerform are continuously carrying out research on AI and sports betting. Problems With AI Algorithms In Sports Betting: Although AI has plenty of potential when it comes to sports betting, gambling and betting are something new for AI, and therefore it does encounter some problems. Humans Are Needed: No matter what insights are given by AI and machine learning, human analysis is always required to interpret the insights provided by the system during an ongoing game. Also, because of the unpredictable nature of sports, human instinct plays a huge role in interpreting these data correctly. Problems With Prediction: AI always provides insights based on the data fed to the algorithm, but an unfortunate and sudden scenario – say, a star player getting injured yet continuing to play through it – might result in the team losing, and the algorithm will never take the impact of that injury into account, mispredicting the result for gamblers. Also, no AI is capable of predicting a momentum shift in the game. Although AI can provide beneficial real-time insights into an ongoing game, it largely misses those turnarounds where the losing team becomes the winner at the end. The Starting Lineup: One of the crucial factors in betting has always been the starting lineup.
If a gambler gets access to the starting lineup information before the game, it can prove immensely valuable for betting. AI obviously doesn't have these 'inside connections' to get the starting lineup, and that is one advantage the bookmakers will always hold over AI when it comes to sports betting. Outlook: Artificial intelligence has been impacting everything around us and has also gone through several rounds of criticism. But over the years it has delivered on most of its promises in varied industries like healthcare and finance. With AI starting to impact the sports betting industry, it promises a legal and more accessible way of betting through sportsbooks, a billion-dollar software industry allowing users to bet safely from their mobiles and laptops. With such a scenario in hand, AI will soon close the gap that exists between gambling and investing.
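To make the stats-to-prediction idea above tangible, here is a hedged sketch of a simple win-probability model: logistic regression trained on synthetic box-score-style differentials. The features, the synthetic ground truth and the example query are all invented for illustration; real models at the companies named in the article draw on far richer data (injuries, lineups, odds movement) that this toy deliberately ignores.

# Minimal sketch: logistic-regression win-probability model on synthetic stats.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
fg_pct_diff = rng.normal(0, 0.05, n)      # field-goal percentage differential
rebound_diff = rng.normal(0, 6, n)        # rebound differential
turnover_diff = rng.normal(0, 3, n)       # turnover differential (fewer is better)
X = np.column_stack([fg_pct_diff, rebound_diff, turnover_diff])

# Synthetic ground truth: better shooting/rebounding and fewer turnovers tend to win.
logit = 20 * fg_pct_diff + 0.15 * rebound_diff - 0.25 * turnover_diff
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
print("Win probability for a sample matchup:",
      round(clf.predict_proba([[0.03, 4.0, -2.0]])[0, 1], 2))

The limitations discussed above apply directly here: the model knows nothing about a late injury or a momentum swing, which is exactly why human analysts still sit on top of such outputs.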
Those days are almost gone when one placed a bet on their favourite team hiding from authorities – sports betting is legal now in many parts of the world. While betting, whether one loves their team or a player or places a bet purely hating the opposing team, one needs to have some knowledge aka […]
["AI Features"]
[]
Sameer Balaganur
2020-01-13T16:20:36
2020
721
["Go", "machine learning", "artificial intelligence", "ELT", "AI", "programming_languages:R", "ML", "programming_languages:Go", "GAN", "R"]
["AI", "artificial intelligence", "machine learning", "ML", "R", "Go", "ELT", "GAN", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/how-legal-sports-betting-industry-can-win-gamble-with-artificial-intelligence/
3
10
1
false
true
true
15,464
Online learning gets personal with Great Lakes personalized Business Analytics Certificate Program
To state the importance of data and analytics, it is best to put it this way: it regularly features among the most wanted skillsets, and with the current IT landscape in flux, upskilling with a business analytics course has become the safest way to stay relevant in the ever-evolving technology sector. But in the current environment, working professionals have to step away from their roles to dive deep into online analytics courses that lend value and professional development. MOOCs – classes for the masses that lack outcomes: One of the most popular formats of e-learning is the MOOC platform (Massive Open Online Courses), popularized by Coursera and edX, yet these suffer from notoriously low completion rates and low engagement. MOOCs are self-paced, and working professionals lack the motivation to reach the finish line. Even upon successful course completion or participation, there is no accredited diploma. According to an OpenCred study, these [MOOC] certificates are intended to be more a "memento" than a credential. Another study on MOOC trends by the Organisation for Economic Co-operation and Development cites as one of the most distinct features of MOOCs that they fail to achieve better learning outcomes. For example, the absence of an instructor or other type of support in case of questions on course material is cited as one of the main reasons for student dropout. An online course with a difference: Bearing this in mind, Great Learning, in association with Great Lakes Institute of Management, a top-ranking B-school, has launched the Business Analytics Certificate Program, a personalized six-month online data analytics certification custom built by highly experienced faculty and industry professionals. Great Lakes is one of the most respected B-schools, and its analytics courses consistently rank in the top 10 by Analytics India Magazine. To understand the USP of personalized analytics education better, AIM spoke to Arjun Nair, Director – Learning Experience at Great Learning. "Our analytics programs have been consistently ranked #1 in the country over the last 3 years. These programs, including BACP, are delivered by our world-class faculty members, who are able to blend academic rigor with industry relevance. Years of research and development has gone into creating a highly impactful curriculum and supporting learning material that will enable you to master these topics and have a delightful learning experience," Nair emphasized. Is e-learning in the tech age successful? To build the workforce of the future, the training ecosystem has to be revamped. While most learning systems have transitioned to a hybrid model, training, or simply put retraining, people requires a personalized, nuanced approach, cites the Pew report. Online training modules are self-paced, and some learners may not have the interest to continue or complete the course. MOOCs are peddled as nanodegrees, and the course content is broken down into short segments. While content definitely moves fast in this space, the micro-courses do not offer a premium learning experience, lack the rigour of assessments, and their Certificates of Completion have been questioned for their credibility. A Babson Survey Research Group study cites that only 29.1 percent of academic leaders "accept the value and legitimacy of online education". In other words, MOOCs haven't made much headway in adding to the talent pool.
Here's how Great Lakes's BACP is unlike a regular online course: BACP is a career-oriented program that focuses on teaching business analytics with deep impact through personalized mentorship from industry experts. The program features an exhaustive, in-depth framework and covers all critical aspects of business analytics in a structured learning format. The course is helmed by world-class faculty and industry experts, and learners get trained by India's most celebrated academicians; in fact, two faculty members feature in AIM's Top 10 Analytics Academicians in India 2017 list. The program features micro classes and personalized mentorship that provide an interactive setting and encourage more progress. It takes a hands-on approach to teaching analytics and equips learners with business analytics and modelling skills using Microsoft Excel and R. In addition to practical assignments, each module offers interaction with a mentor and industry guest speakers. The course is characterized by project-driven learning that enables students to learn how data is used to make business decisions. To shed more light on the six-month program: learners are divided into cohorts of five based on their years of work experience or the domain they come from. Personalized mentorship ensures no learner gets left behind and the program objectives are met. Some of the key highlights of BACP are that the course is delivered in a structured learning format, enables personalized learning, and lets learners build their employability profile under the guidance of a mentor. Moreover, it is a perfect blend of applied learning and analytics training and is geared at graduates and early- or mid-career professionals who plan to advance up the job ladder. Hard facts: The course is divided into six modules and covers 150 hours of learning over a period of six months. The curriculum features some of the most widely used tools and techniques in the industry, such as advanced statistics, R, machine learning and forecasting techniques. The course is backed by six experiential learning projects that aim to strengthen analytical skills in various domains such as finance, marketing, supply chain, healthcare and policy analysis. The idea is that students, working in teams of five, are encouraged to solve real-world data analytics cases under the guidance of a mentor. Concurrently, students can also apply their learnings in a different domain, thereby gaining cross-disciplinary business understanding. Besides strong career support (career enhancement sessions with industry experts, resume-building exercises), students can also tap into the Great Lakes alumni network spread across the globe and get insights on how to maximize learning and build a path-breaking career. Another key takeaway from the program is that learners receive personalized education led by reputed faculty members and can benefit from personalized mentorship that is definitely more impactful. Admissions are open. To apply, click here.
To state the importance of data and analytics, it's best to put it this way. It regularly features in the most wanted skillset and with the current IT landscape in a flux, upskilling with a business analytics course has become the safest way to stay relevant in the ever-evolving technology sector. But in the current […]
["AI Trends"]
["Business Analytics"]
Richa Bhatia
2017-06-06T09:28:03
2017
982
["Go", "machine learning", "programming_languages:R", "AI", "Git", "RAG", "Aim", "analytics", "Business Analytics", "GAN", "R"]
["AI", "machine learning", "analytics", "Aim", "RAG", "R", "Go", "Git", "GAN", "programming_languages:R"]
https://analyticsindiamag.com/ai-trends/online-learning-gets-personal-great-lakes-personalized-business-analytics-certificate-program/
2
10
3
true
true
true
10,116,706
HCLTech and CAST Expand Partnership to Offer Customised Chips to OEMs
HCLTech, a leading global technology company, and Computer Aided Software Technologies, Inc. (CAST), a semiconductor intellectual property (IP) cores provider, announced plans to scale their partnership to offer customised chips that enable original equipment manufacturers (OEMs) across industries to accelerate their digital transformation and automation journeys. HCLTech will enhance design verification, emulation and rapid prototyping of its turnkey system-on-chip (SoC) solutions by leveraging silicon-proven IP cores and controllers from CAST. This will help OEMs in varied industries, including automotive, consumer electronics and logistics, to significantly reduce engineering risk and development costs. "CAST shares our vision for innovative, industry-leading electronic systems design. Their high-quality and well-supported IP cores, coupled with HCLTech's system integration design expertise, will enable us to deliver superior custom chips to our customers worldwide," said Vijay Guntur, President, Engineering and R&D Services, HCLTech. "Like CAST, HCLTech has a decades-long heritage of delivering superior semiconductor SoC solutions to their customers and partners. We look forward to working together with HCLTech and enhancing the reliability, efficiency and user-friendly nature of semiconductor SoCs," said Nikos Zervas, CEO at CAST. CAST is a silicon IP provider founded in 1993. CAST's ASIC and FPGA IP product line includes microcontrollers and processors; compression engines for data, images, and video; interfaces for automotive, aerospace, and other applications; various common peripheral devices; and comprehensive SoC security modules.
This will help OEMs in varied industries including automotive, consumer electronics and logistics, to significantly reduce engineering risk and development costs.
["AI News"]
["HCL Technology"]
Pritam Bordoloi
2024-03-19T12:50:10
2024
220
["API", "programming_languages:R", "AI", "digital transformation", "Git", "RAG", "automation", "HCL Technology", "R"]
["AI", "RAG", "R", "Git", "API", "digital transformation", "automation", "programming_languages:R"]
https://analyticsindiamag.com/ai-news-updates/hcltech-and-cast-expand-partnership-to-offer-customised-chips-to-oems/
2
8
1
false
false
false
10,004,909
The Journey Of Computer Vision To Healthcare Industry
Artificial intelligence is becoming a part of every conversation that we have today. One of the important subfields of AI, computer vision, has recently exploded in terms of advances and use cases. Akshit Priyesh, a data scientist at Capgemini, took the audience through an interesting journey of how research in computer vision has evolved over the years and has now become a prominent part of the healthcare industry. He was addressing the attendees at CVDC 2020, the virtual computer vision developer summit. The Evolution Of Computer Vision Priyesh shared how one of the papers, titled 'Receptive fields of single neurones in the cat's striate cortex' by D. H. Hubel and T. N. Wiesel, laid the foundation for the developments we see today in computer vision. While experimenting with an anesthetised cat and recording the response of its neurons to various images being displayed, the researchers accidentally discovered that the neurons were activated by the line that appeared while images were being changed on the projector. It was this research that led to the discovery that human brains perceive images as edges, curves and lines. Following this, many pieces of research have established that the visual processing capabilities of humans start with simple structures. One particular work by David Marr, titled 'Vision: A Computational Investigation into the Human Representation and Processing of Visual Information', further studied visual perception and established that vision is hierarchical and that it culminates in a description of three-dimensional objects in the surrounding environment. Priyesh said that while it was groundbreaking at the time, it did not explain the mathematics or calculations behind it. Since then, computer vision has come a long way and is now used in various fields such as self-driving cars, facial recognition, the retail industry and more. Among these, the healthcare industry has recently begun to witness important use cases. Computer Vision In The Healthcare Industry Computer vision is one of the emerging AI fields today, and it can potentially support many different applications delivering life-saving functionality for patients. Computer vision is today assisting an increasing number of doctors in diagnosing their patients better, monitoring the evolution of diseases, and prescribing the right treatments. It is not just saving time on routine tasks but is being used to train computers to replicate human sight and understand the objects in front of them. Priyesh shared that currently the most widespread use cases for computer vision in healthcare are in the field of radiology and imaging. AI-powered solutions are finding increasing support among doctors because they can diagnose diseases and conditions from various scans such as X-ray, MRI or CT. Computer vision is also being used to measure blood loss during surgery, e.g. during C-section procedures, to measure body fat percentage, and more. Some of the use cases in the healthcare industry are: Precise diagnosis: Computer vision has been extensively used to offer a precise diagnosis of diseases such as cancer and minimise instances of false positives. Timely detection of illness: Many fatal diseases such as cancer need to be diagnosed at an early stage to increase the chances of survival of the patient. 
Computer vision has been extensively used to detect these diseases in time. Faster medical processes: Use of computer vision can considerably reduce the time that doctors usually take in analysing reports and images. Medical imaging: Computer vision-enabled medical imaging has become quite popular over the years and has proved trustworthy in detecting diseases. Health monitoring: It has also been used by doctors to analyse the health and fitness metrics of patients to make faster and better medical decisions. Nuclear medicine: A part of clinical medicine, nuclear medicine deals with the use of radionuclide pharmaceuticals in diagnosis. Computer vision has been explored in this field too. Priyesh shared that in the current times of the COVID pandemic, computer vision is being used to detect the disease and explore potential treatments for the deadly virus. He and his team at Capgemini have even developed a chatbot that detects likely COVID-positive patients. Based on the user's inputs, it estimates the probability of infection using computer vision.
Artificial intelligence is becoming a part of every conversation that we have today. One of the important subfields of AI, computer vision, has recently exploded in terms of advances and use cases. Akshit Priyesh, a data scientist at Capgemini, took the audience through an interesting journey of how research in computer vision has evolved […]
["AI Features"]
["Computer Vision"]
Srishti Deoras
2020-08-15T16:00:07
2020
689
["Replicate", "artificial intelligence", "programming_languages:R", "AI", "computer vision", "Ray", "llm_models:Gemini", "Computer Vision", "Rust", "R", "programming_languages:Rust"]
["AI", "artificial intelligence", "computer vision", "Ray", "R", "Rust", "Replicate", "llm_models:Gemini", "programming_languages:R", "programming_languages:Rust"]
https://analyticsindiamag.com/ai-features/the-journey-of-computer-vision-to-healthcare-industry/
2
10
1
true
false
true
17,693
Asset Management Is Being Completely Disrupted By Data Science. Here’s How.
The financial services industry has always worked with large volumes of data, and when it comes to asset management, the data volume increases multi-fold. The last decade has witnessed massive growth in the financial services industry in terms of data analytics technologies. While early algorithms used structured data only, modern machine learning based solutions can yield insights even from highly unstructured records. Moreover, sentiment analysis and image recognition are now employed to anticipate potential peaks and valleys in the stock market. For example, collecting and analysing social media trends around brands helps the trader foresee whether a company's stock prices will rise or fall. Despite the changing trend, traditional wealth management companies continue to be late adopters of these technologies and are still seeking ways to become data-driven. Here are the main operations that can be enhanced with a data-driven approach. Data-driven asset management: 1. Smart advisors (or robo-advisors): These advisors have been around for almost a decade and have now become the hottest personalisation trend in the financial management industry. The algorithms consider various customer data – risk tolerance, behaviour, legal benchmarks, preferences – and make recommendations based on this data. By combining multiple data sources, one can increase the dimensionality of models and solve complex optimisation problems that account for hundreds of individual portfolio factors. This allows portfolio managers to suggest tailored investment plans to clients in both B2B and B2C operations. 2. Fraud detection powered by neural networks: Another emerging trend in financial management is anti-money-laundering and fraud-detection models that are powered by neural networks and help in identifying suspicious activities. The system is trained and developed in a way that it can track and assess the behaviour of all the individuals involved in the process. These systems apply deep neural networks to detect fraud by analysing both structured and unstructured data, including all kinds of online footprints. Strong neural networks can efficiently detect implicit links between a customer and potential fraud. 3. Predictive analytics: Predictive analytics uses historical data to determine the relationships between data and outcomes and builds models to check against current data. Stocks, bonds, futures, options and rate movements form a stream of billions of deal records every day, which makes for non-stationary time series data. These often become complex problems for financial analysts because conventional statistical methods fall short both in terms of prediction accuracy and speed. There are three common approaches to handling such data. Machine learning methods: Models are trained on short-term historical data and yield predictions based on it. Stream learning: A predictive model is continuously updated by every new inbound record, which provides better accuracy. Ensemble models: Multiple machine learning models analyse incoming data, and the predictions are based on consolidated forecasting results (an illustrative sketch follows below). 4. Scenario-based analytics: This method lets financial managers analyse possible future events by considering alternative possible outcomes. Instead of showing just one exact picture, it presents several alternative future developments. Computing power and new data processing packages have made it possible to build stress models for company operations and stock market performance. 
With this method, one can test millions of scenarios accounting for hundreds of unique market conditions. Why must asset managers start adopting technology? There has been much talk about money managers being slow in adopting technology for asset management. Upgrading to digitisation will reduce the risk of these players losing market share to digitally savvy businesses that are aiming to disrupt the investment industry. According to a poll conducted by Create Research covering 458 asset and wealth managers, only 27 per cent of wealth managers offer robo-advisers, and only 31 per cent use big data. The asset management industry's need to modernise comes as it grapples with pressures ranging from tougher regulation to stronger competition. The other reason to consider making the shift is the millennials. They are not just digitally savvy but also potentially rich. Just to give a sense, millennials will soon make up the largest part of the workforce and also stand a strong chance of inheriting ancestral wealth, which could amount to approximately $15tn in the U.S. and $12tn in Europe over the next 15-20 years, Create Research said. With all that money and digital savviness in play, financial advisors should equip themselves to stand a chance in the growing competition. Conclusion: Adopting data science solutions for wealth management is not new in the financial market. However, wealth management organisations have continued to be late adopters of data-driven technologies. Yet, there is no denying the fact that industry leaders have been the first to adopt these technologies and have set a benchmark for others to meet. Data science technologies are the next big thing in wealth management. These technologies have the capability to intensify interest in semantic analysis, ML-based time series forecasting, and even scenario-based modelling. Given a fairly late transformation compared to the financial services industry in general, the smart move today is to seek partnerships with tech consultancies and fintech start-ups to avoid reinventing the wheel.
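To make the ensemble approach mentioned above concrete, here is a minimal, hypothetical sketch in Python. The synthetic random-walk "price" series, the lag-feature construction and the particular choice of scikit-learn models are assumptions made purely for illustration; they are not taken from the article.
# Minimal ensemble-forecasting sketch (illustrative only; data and model choices are assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))   # synthetic random-walk "price" series

# Turn the series into supervised data: the previous 5 observations predict the next one.
window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X_train, X_test = X[:-50], X[-50:]
y_train, y_test = y[:-50], y[-50:]

# Train two different models and average their forecasts (consolidated forecasting results).
models = [LinearRegression(), GradientBoostingRegressor(random_state=0)]
predictions = []
for model in models:
    model.fit(X_train, y_train)
    predictions.append(model.predict(X_test))
ensemble_forecast = np.mean(predictions, axis=0)

print("Ensemble MAE:", np.abs(ensemble_forecast - y_test).mean())
In a stream-learning variant, the same models would be refreshed as each new record arrives rather than trained once on a fixed window.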
The financial services industry has always been working with large volumes of data and when it comes to asset management, the data volume increases multi-fold. The last decade has witnessed massive growth in the financial services industry in terms of data analytics technologies. While the early algorithms used structured data only, modern machine learning based […]
["IT Services"]
["digitisation"]
Priya Singh
2017-09-13T09:03:18
2017
859
["data science", "machine learning", "AI", "neural network", "digitisation", "ML", "sentiment analysis", "Aim", "analytics", "predictive analytics", "fraud detection"]
["AI", "machine learning", "ML", "neural network", "data science", "analytics", "Aim", "predictive analytics", "fraud detection", "sentiment analysis"]
https://analyticsindiamag.com/it-services/asset-management-completely-disrupted-data-science-heres/
3
10
3
false
true
true
67,872
Hands-On Guide to Predict Fake News Using Logistic Regression, SVM and Naive Bayes Methods
Millions of news items are published on the internet every day. If we include tweets from Twitter, this figure multiplies further. Nowadays, the internet is becoming the biggest source of spreading fake news. A mechanism is required to identify fake news published on the internet so that readers can be warned accordingly. Some researchers have proposed methods to identify fake news by analysing the text of the news with machine learning techniques. Here, we will also discuss machine learning techniques that can identify fake news correctly. In this article, we will train machine learning classifiers to predict whether given news is real or fake. For this task, we will train three popular classification algorithms – Logistic Regression, Support Vector Classifier and Naive Bayes – to predict fake news. After evaluating the performance of all three algorithms, we will conclude which among them is best for the task. The Data Set The dataset used in this article is taken from Kaggle, where it is publicly available as the Fake and Real News dataset. This dataset has two CSV files containing true and fake news respectively, each with title, text, subject and date attributes. There are 21,417 true news records and 23,481 fake news records in the true and fake CSV files respectively. To train the model for classification, we will add a target column to both datasets labelling the records as true or fake. First, we will import all the required libraries.
# Importing libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
After importing the libraries, we will read the CSV files into the program.
# Reading CSV files
true = pd.read_csv("True.csv")
fake = pd.read_csv("Fake.csv")
Here, we will add fake and true labels as the target attribute in both datasets and create our main dataset by combining the fake and real datasets.
# Specifying fake and real
fake['target'] = 'fake'
true['target'] = 'true'
# News dataset
news = pd.concat([fake, true]).reset_index(drop=True)
news.head()
After specifying the main dataset, we will define the train and test datasets by splitting the main dataset. We have kept 20% of the data for testing the classifiers; this can be adjusted accordingly.
# Train-test split
x_train, x_test, y_train, y_test = train_test_split(news['text'], news.target, test_size=0.2, random_state=1)
In the next step, we will classify the news texts as fake or true using the classification algorithms, one by one. First, we will obtain the count vectors and term frequencies that will serve as input attributes for the classification model, while the target attribute defined above works as the output attribute. To bind the count vectorizer, the TF-IDF transformer and the classification model together, the concept of a pipeline is used. A machine learning pipeline is used to help automate machine learning workflows. 
Pipelines operate by enabling a sequence of data transformations to be chained together into a model that can be tested and evaluated to achieve an outcome. In the first step, we will classify the news text using the Logistic Regression model and evaluate its performance using evaluation metrics.
# Logistic regression classification
pipe1 = Pipeline([('vect', CountVectorizer()),
                  ('tfidf', TfidfTransformer()),
                  ('model', LogisticRegression())])
model_lr = pipe1.fit(x_train, y_train)
lr_pred = model_lr.predict(x_test)
print("Accuracy of Logistic Regression Classifier: {}%".format(round(accuracy_score(y_test, lr_pred)*100, 2)))
print("\nConfusion Matrix of Logistic Regression Classifier:\n")
print(confusion_matrix(y_test, lr_pred))
print("\nClassification Report of Logistic Regression Classifier:\n")
print(classification_report(y_test, lr_pred))
After performing the classification using the logistic regression model, we will classify the news text using the Support Vector Classifier model and evaluate its performance using evaluation metrics.
# Support Vector classification
pipe2 = Pipeline([('vect', CountVectorizer()),
                  ('tfidf', TfidfTransformer()),
                  ('model', LinearSVC())])
model_svc = pipe2.fit(x_train, y_train)
svc_pred = model_svc.predict(x_test)
print("Accuracy of SVM Classifier: {}%".format(round(accuracy_score(y_test, svc_pred)*100, 2)))
print("\nConfusion Matrix of SVM Classifier:\n")
print(confusion_matrix(y_test, svc_pred))
print("\nClassification Report of SVM Classifier:\n")
print(classification_report(y_test, svc_pred))
Finally, we will classify the news text using the Naive Bayes Classifier model and evaluate its performance using evaluation metrics.
# Naive Bayes classification
pipe3 = Pipeline([('vect', CountVectorizer()),
                  ('tfidf', TfidfTransformer()),
                  ('model', MultinomialNB())])
model_nb = pipe3.fit(x_train, y_train)
nb_pred = model_nb.predict(x_test)
print("Accuracy of Naive Bayes Classifier: {}%".format(round(accuracy_score(y_test, nb_pred)*100, 2)))
print("\nConfusion Matrix of Naive Bayes Classifier:\n")
print(confusion_matrix(y_test, nb_pred))
print("\nClassification Report of Naive Bayes Classifier:\n")
print(classification_report(y_test, nb_pred))
As we can analyse from the accuracy scores, confusion matrices and classification reports of all three models, we can conclude that the Support Vector Classifier has outperformed the Logistic Regression model and the Multinomial Naive Bayes model in this task. The Support Vector Classifier has given about 100% accuracy in classifying the fake news texts. We can see a snapshot of the predicted labels for the news texts by the Support Vector Classifier in the image below.
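As a quick, hedged usage sketch (not part of the original tutorial): once the three pipelines above have been fitted, raw strings can be passed straight to predict(), because each pipeline bundles the vectoriser, the TF-IDF transformer and the classifier. The example headlines below are invented purely for illustration.
# Usage sketch: classify previously unseen headlines with the fitted pipelines.
# The headlines are made up; real inputs would come from scraped or user-supplied text.
new_texts = [
    "Government announces new budget allocation for renewable energy",
    "Celebrity endorses miracle pill that cures every disease overnight",
]
for name, model in [("Logistic Regression", model_lr),
                    ("Linear SVC", model_svc),
                    ("Multinomial NB", model_nb)]:
    print(name, "->", list(model.predict(new_texts)))   # prints 'true' or 'fake' per headline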
In this article, we will train machine learning classifiers to predict whether given news is real or fake. For this task, we will train three popular classification algorithms – Logistic Regression, Support Vector Classifier and Naive Bayes – to predict fake news. After evaluating the performance of all three algorithms, we will conclude which among them is best for the task.
["Deep Tech"]
["Classification", "logistic regression", "Naive Bayes classifier", "Support Vector Machine"]
Dr. Vaibhav Kumar
2020-06-22T15:00:00
2020
776
["Go", "Classification", "NumPy", "machine learning", "TPU", "programming_languages:R", "AI", "data_tools:Pandas", "Naive Bayes classifier", "logistic regression", "Support Vector Machine", "programming_languages:Go", "R", "Pandas"]
["AI", "machine learning", "Pandas", "NumPy", "TPU", "R", "Go", "programming_languages:R", "programming_languages:Go", "data_tools:Pandas"]
https://analyticsindiamag.com/deep-tech/hands-on-guide-to-predict-fake-news-using-logistic-regression-svm-and-naive-bayes-methods/
4
10
0
true
false
false
17,209
Analytics India Companies Study 2017
Each year we come out with our study of analytics firms in India. The goal is to put numbers to the scale and depth of how various organizations around analytics and related technologies have surfaced in recent years. Here's our annual study for this year. Read Analytics India Companies Study 2016 Read Analytics India Companies Study 2015 Read Analytics India Companies Study 2013 Read Analytics India Companies Study 2012 Key Trends Last year has seen the biggest jump in the number of companies in India working on analytics in some shape and form. More than 5,000 companies in India claim to provide analytics as an offering to their customers. This includes a small number of companies into products and a larger chunk offering offshore, recruitment or training services. There has been a growth rate of almost 100% year over year in the number of analytics companies in India since last year. Even so, analytics companies in India are still very few in number compared to the strength of analytics companies around the globe. In fact, India accounts for just 7% of global analytics companies, down from 9% last year. Company Size On average, Indian analytics companies have 179 employees on their payroll, an increase from an average of 160 employees last year. On a global scale, this is quite a good number, as analytics companies across the world employ an average of 132 employees. Almost 77% of analytics companies in India have fewer than 50 employees, compared to 86% at the global level. Cities Trend Delhi/NCR trumps Bangalore to house the largest number of analytics firms in India this year, at almost 28%. It is followed by Bangalore at 25% and Mumbai at 16%. Hyderabad, Chennai and Pune are far behind, with their shares of analytics companies in single digits, as reflected in the graphs above. However, these numbers seem to have not changed much since last year.
Each year we come out with our study of Analytics firms in India. The goal is to put numbers into the scale and depth of how various organizations around analytics and related technologies have surfaced in recent years. Here’s our annual study for this year. Read Analytics India Companies Study 2016 Read Analytics India Companies […]
["AI Features"]
[]
Дарья
2017-08-24T09:56:06
2017
328
["Go", "programming_languages:R", "AI", "programming_languages:Go", "Git", "RAG", "Aim", "analytics", "GAN", "R"]
["AI", "analytics", "Aim", "RAG", "R", "Go", "Git", "GAN", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-features/analytics-india-companies-study-2017/
3
10
0
false
false
false
51,707
Baidu Goes On A Patent Frenzy; Applies For ML-Based Audio Synthesis Ownership
Baidu has come out on top as the leading artificial intelligence patent applicant in China, eclipsing the likes of Tencent and Huawei. Reportedly, Baidu is also leading in the highly competitive area of intelligent driving, with 1,237 patent applications. After years of research, Baidu has developed a comprehensive AI ecosystem and is now at the forefront of the global AI industry. "Baidu retained the top spot for AI patent applications in China because of our continuous research and investment in developing AI, as well as our strategic focus on patents." – Victor Liang, VP, Baidu via Baidu Baidu's patents cover a wide variety of domains, including deep learning (1,429 patents), NLP (938 patents) and speech recognition (933 patents). While Baidu topped the charts in China, its R&D centre in the US has also applied for patents at the US patent office. Especially in the speech recognition domain, Baidu has its eyes locked on audio synthesis using CNNs. In patent application US20190355347A1, titled 'Spectrogram to waveform synthesis using convolutional networks', which covers a computer-implemented method for training a neural network model for spectrogram inversion, the following steps are listed (an illustrative sketch follows at the end of this article): inputting an input spectrogram comprising a number of frequency channels into a convolutional neural network (CNN); outputting from the CNN a synthesised waveform for the input spectrogram, the input spectrogram having a corresponding ground-truth waveform; using the corresponding ground-truth waveform, the synthesised waveform, and a loss function comprising at least one or more loss components selected from spectral convergence loss; and using the loss to update the CNN. There is a clear mention of using convolutional neural networks, and since CNNs are the lifeblood of many modern-day ML applications, any claim on even a minor part can hurt in the long run. Perils Of A Patent Race The year 2019 witnessed a sudden growth of interest in owning algorithms. So far, Google has got a bad rap for going after batch normalisation, a widely used technique in deep learning. Even if the intention is to safeguard the research from falling to pseudo players, this whole ordeal is a slippery slope where the owners can leverage smaller firms that use the advanced technology. In the case of Baidu, too, there is a danger of losing ownership of many audio processing applications. Baidu is a Chinese company, which has contributed to the growing fears amongst the ML community. Its AI vision was fortified with projects like Apollo, an open source autonomous driving platform, along with other intelligent driving innovations. China has allegedly been involved in many intellectual property thefts from US companies. So, when Baidu's foreign division applies for a patent, one cannot help but think about the consequences of handing over ownership. According to a 2019 United States Trade report, China continues to be the world's leading source of counterfeit goods, reflecting its failure to take decisive action to curb the widespread manufacture and export of counterfeit goods. https://twitter.com/PoliticalShort/status/1202682289408348160 The most important thing any country can do in the current era is protect its trade secrets. Even though Google has also been accused of engaging in a patent race, adding China to the Baidu equation changes everything. 
China has a system that encourages the transaction of intellectual property, allowing almost anyone to access cutting-edge technology. How far that goes is a whole new argument. However, the widespread opening of overseas branches of US companies in China made IP transfer somewhat reckless. This no doubt came as a shock to the owners, as there is a danger of new competitors sprouting up using the stolen technology, leading to losses of billions of dollars. This is a serious issue, since China has been consistently notorious overseas when it comes to IP theft. Here are a few cases that got the spotlight, as listed by Jeff Ferry, CPA Research Director: In 2004, Cisco Systems took Huawei to court for stealing its core router software code and using it in Huawei routers. Huawei routers, widely used in China and Europe, have played a key role in Huawei's growth into a $95 billion global telecom equipment giant. In 2011, AMSC filed the largest-ever IP theft case in a Chinese court, seeking $1.2 billion in compensation for its losses. AMSC had partnered with a Chinese maker of wind turbine hardware, Sinovel, to sell into the Chinese market, and AMSC sales rose rapidly into the hundreds of millions of dollars. In 2011, AMSC discovered that Sinovel had an illegal copy of the entire AMSC software code on one of its windmills. In 2015, the federal government charged six Chinese citizens with stealing wireless communications technology from two Silicon Valley microchip makers, Avago and Skyworks, and launching their own company to sell that technology in China. Apart from this, Huawei was also accused of stealing patented smartphone camera technology a couple of months ago. The biggest concern for developers regarding patenting can be distilled down to two words: infinite leverage. They fear that aspirants will either be squeezed midway or get discouraged altogether from accessing state-of-the-art technology, which again could lead to outcomes like the much-dreaded AI winter.
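For readers curious about the technical claim itself, here is a minimal, hypothetical sketch of the spectrogram-inversion steps listed earlier: a CNN maps a magnitude spectrogram to a waveform and is updated with a spectral convergence loss. The toy architecture, shapes and hyperparameters are invented for illustration and are not taken from Baidu's filing.
# Toy spectrogram-to-waveform sketch with a spectral convergence loss (illustration only).
import torch
import torch.nn as nn

n_fft, hop = 512, 128
freq_bins = n_fft // 2 + 1          # 257 frequency channels

class ToyInverter(nn.Module):
    """Maps a magnitude spectrogram (batch, freq_bins, frames) to a waveform."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(freq_bins, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, hop, kernel_size=3, padding=1),   # hop samples per frame
        )
    def forward(self, spec):
        frames = self.net(spec)                              # (batch, hop, frames)
        return frames.transpose(1, 2).reshape(spec.size(0), -1)

def spectral_convergence(pred_wave, target_spec):
    # Frobenius-norm ratio between the predicted and ground-truth magnitude spectrograms.
    pred_spec = torch.stft(pred_wave, n_fft=n_fft, hop_length=hop,
                           window=torch.hann_window(n_fft), return_complex=True).abs()
    frames = min(pred_spec.size(-1), target_spec.size(-1))
    diff = pred_spec[..., :frames] - target_spec[..., :frames]
    return torch.linalg.norm(diff) / torch.linalg.norm(target_spec[..., :frames])

model = ToyInverter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

wave = torch.randn(1, hop * 100)    # stand-in for a ground-truth waveform
spec = torch.stft(wave, n_fft=n_fft, hop_length=hop,
                  window=torch.hann_window(n_fft), return_complex=True).abs()

pred = model(spec)                               # synthesised waveform from the spectrogram
loss = spectral_convergence(pred, spec)          # compare against the ground-truth spectrum
loss.backward()
optimizer.step()                                 # use the loss to update the CNN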
Baidu has come out on top as the leading artificial intelligence patent applicant in China, eclipsing the likes of Tencent and Huawei. Reportedly, Baidu is also leading in the highly competitive area of intelligent driving, with 1,237 patent applications. After years of research, Baidu has developed a comprehensive AI ecosystem and is now at the forefront […]
["AI Trends"]
["Baidu", "CNNs", "Machine Learning", "Patent"]
Ram Sagar
2019-12-11T19:00:13
2019
871
["artificial intelligence", "TPU", "AI", "neural network", "ML", "Machine Learning", "Patent", "RAG", "NLP", "Aim", "deep learning", "Baidu", "CNNs", "R"]
["AI", "artificial intelligence", "ML", "deep learning", "neural network", "NLP", "Aim", "RAG", "TPU", "R"]
https://analyticsindiamag.com/ai-trends/baidu-patent-china-machine-learning-united-states-ip-theft/
4
10
2
true
false
false
10,103,431
Microsoft Doesn’t Really Need OpenAI, it Wants AGI
Striking at just the right moment, Satya Nadella, chairman and CEO of Microsoft, swiftly onboarded Sam Altman at Microsoft. Altman will be joined by former president of OpenAI Greg Brockman and a few other researchers from the company in dire straits, notably Jakub Pachocki, the person leading GPT-4. Absorbing Altman and his team into Microsoft could possibly be the biggest bet Nadella has made in his nearly decade-long stint as the CEO of Microsoft, bigger than the billions of dollars of investment in OpenAI. However, with the way things are moving, by the time we publish this article, Altman might return to OpenAI, rendering our arguments irrelevant. Reports suggest that despite the announcements, Altman joining Microsoft is not a done deal. Nadella, in a recent interview with Bloomberg, stated that he will continue to support Altman and his team irrespective of where Altman is. However, it would make more sense for Microsoft to have Altman and the team at Microsoft rather than at OpenAI. The startup's fate currently remains undecided, even though it has a new CEO in Emmett Shear. Nadella, on the other hand, would want as many OpenAI folks as possible to join this new AI group led by Altman at Microsoft. The end-game OpenAI, which started off in 2015 as a non-profit, is focussed on achieving artificial general intelligence (AGI). As stated in one of their blogs, their mission has been "to build AGI that is safe and benefits all of humanity". However, interestingly, according to OpenAI, Microsoft will not have exclusive rights to use OpenAI's post-AGI models. Due to its USD 13 billion investment in the company, Microsoft currently has exclusive rights to use models like GPT-4 and GPT-4 Turbo. (Source: OpenAI blog) Once AGI emerges, whether in the form of GPT-5, GPT-6, or an entirely new model, Microsoft will not possess exclusive rights to utilise that technology. Given Microsoft's corporate nature driven by financial interests, it would want exclusive access to the technology and seek opportunities for monetisation regardless of its origin. "Reality is that an in-house lab led by Sam and Greg might be better for Microsoft than the existing arrangement given the AGI clause," Gavin Baker, managing partner & CIO at Atreides Management, said in an X post. Even if Microsoft successfully acquires this cutting-edge technology from OpenAI, the blog goes on to clarify that in a for-profit structure, there would be equity caps. These limits are designed to prioritise a balance between commercial objectives and considerations of safety and sustainability, rather than solely pursuing profit maximisation. Achieve AGI at Microsoft Nevertheless, if Altman and his top team collaborate at Microsoft within a carefully selected group, there is a potential scenario where Altman could achieve AGI at Microsoft rather than at OpenAI. This would grant Microsoft exclusive access to the technology, providing it with the opportunity to maximise its monetisation, an unsettling but plausible prospect. This could be another reason Nadella was quick to get Altman and Brockman on board at Microsoft as soon as negotiations with the OpenAI board of directors faltered. After all, it was Altman who started the generative AI explosion by releasing ChatGPT to the world nearly a year ago. 'Come achieve AGI at Microsoft' might possibly be the exact words Nadella used when he tabled the offer to Altman. 
So far, besides Altman, Brockman, and Pachocki, Aleksander Madry and Szymon Sidor, all previously working for OpenAI, have agreed to join Altman's new AI group. https://twitter.com/marktenenholtz/status/1726585324271481332 Appearing optimistic, Brockman also announced on X (previously Twitter) that they are going to build something new and that it will be incredible. So far, not much is known about this newly formed group, which Altman will lead, besides that it will be a new advanced research team (possibly on a mission for AGI). But it would be interesting to see how much of their work aligns with OpenAI's. Microsoft does not really need OpenAI 'OpenAI is nothing without its people,' almost all OpenAI employees tweeted yesterday in a synchronised manner resembling a coordinated X campaign, expressing solidarity with those who departed from the company. Moreover, nearly all of them have threatened to resign. Given the turmoil, many other companies working in AI are reportedly trying to poach OpenAI employees. Salesforce CEO Marc Benioff also posted on X, "Salesforce will match any OpenAI researcher who has tendered their resignation full cash & equity OTE to immediately join our Salesforce Einstein Trusted AI research team." "That talent is the crown jewel of the organisation," Tammy Madsen, professor of management in the Leavey School of Business at Santa Clara University, told TechCrunch. Given that Altman is already on board, Microsoft would want to get more talent on board from OpenAI and continue the pursuit of AGI at Microsoft. Brockman also declared on X that more will follow suit. This remains a likely scenario; however, these are uncertain times, and we will have to see how the whole situation pans out. But Nadella, so far, has said that Microsoft remains committed to its partnership with OpenAI. "We look forward to getting to know Emmett Shear and OpenAI's new leadership team and working with them," he posted on X. Currently, Microsoft is banking heavily on OpenAI's models such as GPT-4 and will continue to need them until Altman's AI group comes up with newer and better models. Moreover, the intricate nature of the clauses of the deal between Microsoft and OpenAI is not public yet. Interestingly, Altman's new AI team could be working on exactly the same thing as OpenAI, and in a future scenario where Altman's team has achieved AGI, Microsoft may not need OpenAI anymore. Furthermore, the duration of Microsoft's ongoing financial support for OpenAI and potential shifts in strategy amid significant reshuffling pose intriguing uncertainties. The dynamics could again shift significantly, especially if the board at OpenAI resigns and Altman is reinstated as the CEO, altering the entire landscape of these arguments.
According to OpenAI, Microsoft will not have exclusive rights to use OpenAI’s post-AGI model.
["Global Tech"]
["Greg Brockman", "Sam Altman"]
Pritam Bordoloi
2023-11-21T15:18:19
2023
986
["Go", "ChatGPT", "Sam Altman", "GPT-5", "AI", "OpenAI", "Greg Brockman", "GPT", "generative AI", "Rust", "GAN", "R"]
["AI", "generative AI", "GPT-5", "ChatGPT", "OpenAI", "R", "Go", "Rust", "GPT", "GAN"]
https://analyticsindiamag.com/global-tech/microsoft-doesnt-really-need-openai-it-wants-agi/
2
10
3
false
false
false
10,119,210
Financial Times Enters into a Content Licensing Agreement with OpenAI
The Financial Times has entered into an agreement with OpenAI to license its content so that the AI startup can build new AI tools. According to a press release from FT, users of ChatGPT will see summaries, quotes, and direct links to FT articles. Any query yielding information from the FT will be clearly credited to the publication. The FT, which is already a user of OpenAI's products, specifically the ChatGPT Enterprise, recently introduced a beta version of a generative AI search tool called "Ask FT." This feature, powered by Anthropic's Claude LLM, enables subscribers to search for information across the publication's articles. "Apart from the benefits to the FT, there are broader implications for the industry. It's right, of course, that AI platforms pay publishers for the use of their material," said FT chief executive John Ridding. "At the same time, it's clearly in the interests of users that these products contain reliable sources," he added. This marks OpenAI's fifth agreement within the past year, adding to a series of similar deals with prominent news organizations such as the US-based Associated Press, Germany's Axel Springer, France's Le Monde, and Spain's Prisa Media. In December, The New York Times became the first major US media organization to file a lawsuit against OpenAI and Microsoft, alleging that these tech giants utilized millions of articles without proper licensing to develop the underlying models of ChatGPT.
Agreement comes as OpenAI seeks data from reliable sources to train latest AI models.
["AI News"]
["ChatGPT", "Microsoft", "OpenAI"]
Sukriti Gupta
2024-04-29T16:12:05
2024
233
["Anthropic", "ChatGPT", "OpenAI", "AI", "AWS", "GPT", "generative AI", "GAN", "R", "Microsoft", "startup"]
["AI", "generative AI", "ChatGPT", "OpenAI", "Anthropic", "AWS", "R", "GPT", "GAN", "startup"]
https://analyticsindiamag.com/ai-news-updates/financial-times-enters-into-a-content-licensing-agreement-with-openai/
2
10
3
false
false
false
10,049,972
Google Upgrades Translatotron, Its Speech-to-Speech Translation Model
Google AI has introduced the second version of Translatotron, its speech-to-speech translation (S2ST) model that can directly translate speech between two different languages without the need for many intermediary subsystems. Conventional cascade S2ST systems are made up of speech recognition, machine translation, and speech synthesis subsystems. Because of this, cascade systems suffer from potentially longer latency, loss of information, and compounding errors between subsystems. To address this, Google released Translatotron in 2019, an end-to-end speech-to-speech translation model that the tech giant claimed was the first end-to-end framework to directly translate speech from one language into speech in another language. The single sequence-to-sequence model was used to create synthesised translations of voices so that the sound of the original speaker remained intact. But despite its ability to automatically produce human-like speech, it underperformed compared to a strong baseline cascade S2ST system. Translatotron 2 In response, Google has introduced 'Translatotron 2', an updated model with improved performance and a new method for transferring the voice to the translated speech. In addition, Google claims the revised version can successfully transfer voice even when the input speech contains multiple speakers. Tests on three corpora confirmed that Translatotron 2 outperforms the original Translatotron significantly on translation quality, speech naturalness, and speech robustness. The model also aligns better with AI principles and is more secure, preventing potential misuse. For example, in response to deepfakes being created with Translatotron, Google's paper states, "The trained model is restricted to retain the source speaker's voice, and unlike the original Translatotron, it is not able to generate speech in a different speaker's voice, making the model more robust for production deployment, by mitigating potential misuse for creating spoofing audio artefacts." Architecture The main components of Translatotron 2 are a speech encoder, a target phoneme decoder, a target speech synthesiser, and an attention module connecting all the components (a toy sketch of this wiring follows at the end of this article). The architecture follows that of a direct speech-to-text translation model with the encoder, the attention module and the decoder. In addition, here, the synthesiser is conditioned on the output generated by the attention module and the decoder. The model architecture by Google. How are the two models different? The conditioning difference: In Translatotron 2, the output from the target phoneme decoder is an input to the spectrogram synthesiser, which makes the model easier to train while yielding better performance; the previous model used this output as an auxiliary loss only. Spectrogram synthesiser: In Translatotron 2, the spectrogram synthesiser is duration based, improving the robustness of the speech; the previous model had an attention-based spectrogram synthesiser, which is known to suffer robustness issues. Attention driving: While both models use an attention-based connection to the encoded source speech, in Translatotron 2 this attention is driven by the phoneme decoder. This makes sure that the acoustic information seen by the spectrogram synthesiser is aligned with the translated content being synthesised and retains each speaker's voice. To ensure the model cannot create deepfakes, as was possible with the original Translatotron, version 2 uses only a single speech encoder to retain the speaker's voice. 
This works for both linguistic understanding and voice capture while preventing the reproduction of non-source voices. Furthermore, the team used a modified version of PnG NAT to train the model to retain speaker voices across translation. PnG NAT is a TTS model that can perform cross-lingual voice transfer to synthesise training targets. Additionally, Google's modified version of PnG NAT includes a separately trained speaker encoder to ensure that Translatotron 2 can perform zero-shot voice transfer. ConcatAug ConcatAug is Google's proposed concatenation-based data augmentation technique, which enables the model to retain each speaker's voice in the translated speech when the input speech contains multiple speakers. ConcatAug "augments the training data on the fly by randomly sampling pairs of training examples and concatenating the source speech, the target speech, and the target phoneme sequences into new training examples," according to the team. The results then contain two speakers' voices in both the source and the target speech, and the model learns further based on these examples. Performance The performance tests verified that Translatotron 2 outperforms the original Translatotron by large margins on translation quality, speech naturalness, and speech robustness. Notably, the model also excelled on the Fisher corpus, a challenging Spanish-English translation test. The model's translation quality and speech quality approach those of a strong baseline cascade system. Listen to the audio samples here. Performance on the CoVoST 2 corpus (source languages fr / de / es / ca): Translatotron 2: 27.0 / 18.8 / 27.7 / 22.5; Translatotron: 18.9 / 10.8 / 18.8 / 13.9; ST (Wang et al. 2020): 27.0 / 18.9 / 28.0 / 23.9; Training Target: 82.1 / 86.0 / 85.1 / 89.3. Source: Google. Additionally, along with Spanish-to-English S2ST, the model was evaluated in a multilingual setup in which the input speech came from four different languages without any indication of which language it was. The model successfully detected and translated them into English. The research team is positive that this makes Translatotron 2 more applicable for production deployment, given the mitigation of potential abuse.
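To make the component wiring described in the Architecture section easier to picture, here is a toy, hypothetical sketch in Python. Every module, size and wiring detail below is a simplified placeholder chosen for illustration; it is not Google's implementation, and in particular the real synthesiser is duration based rather than a single linear layer.
# Toy wiring sketch: encoder -> decoder-driven attention -> phoneme output,
# with the synthesiser conditioned on both the attention context and the decoder output.
import torch
import torch.nn as nn

class ToyTranslatotron2(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_phonemes=100):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)        # speech encoder
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)        # target phoneme decoder
        self.attention = nn.MultiheadAttention(hidden, 4, batch_first=True)
        self.phoneme_out = nn.Linear(hidden, n_phonemes)
        self.synthesiser = nn.Linear(hidden * 2, n_mels)               # placeholder synthesiser

    def forward(self, src_mels):                         # (batch, frames, n_mels)
        enc, _ = self.encoder(src_mels)                  # encode source speech
        dec, _ = self.decoder(enc)                       # phoneme-decoder states
        ctx, _ = self.attention(dec, enc, enc)           # attention driven by the decoder
        phonemes = self.phoneme_out(dec)                 # target phoneme predictions
        spec = self.synthesiser(torch.cat([ctx, dec], dim=-1))  # translated spectrogram
        return phonemes, spec

model = ToyTranslatotron2()
mels = torch.randn(2, 120, 80)                           # dummy batch of input spectrograms
phonemes, spec = model(mels)
print(phonemes.shape, spec.shape)                        # (2, 120, 100) and (2, 120, 80)
The one structural point the sketch does mirror is the conditioning described above: the attention is driven by the decoder states, and the synthesiser sees both the attention context and the decoder output.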
Google claims the revised version can successfully transfer voice even when the input speech consists of multiple speakers.
["Global Tech"]
["Speech Analytics"]
Avi Gopani
2021-09-30T14:00:00
2021
816
["Go", "data augmentation", "TPU", "Speech Analytics", "AI", "programming_languages:R", "ML", "programming_languages:Go", "Aim", "R"]
["AI", "ML", "Aim", "TPU", "R", "Go", "data augmentation", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/global-tech/google-upgrades-translatotron-its-speech-to-speech-translation-model/
4
9
1
false
true
true
45,151
The Art In Data Science: From Visualisations To Storytelling
Data Science is a high-ranking profession that allows the curious to make game-changing discoveries in the field of Big Data. A report from Indeed, one of the top job sites, has shown a 29% increase in demand for data scientists year over year. Moreover, since 2013, the demand has increased by a whopping 344%. So, what's the reason for such demand? A data scientist's fundamental skill is to write code. They are also advanced analysts who emphasise numbers and hidden insights in data. It makes them hardcore lovers of science and statistics. However, science is a complex branch that not everyone understands. What people understand easily is art. Stakeholders from a non-technical background and busy business users find it hard and time-consuming to understand the science behind Data Science. It would be more enduring for data scientists to communicate insights in simple language with memorable methods. Turning Data Science into Art Data scientists have understood the need for easy insight consumption. Hence, the last decade has seen tremendous growth in terms such as data visualisation, data art, and data stories. The pool of people discovering hidden insights has expanded beyond just data scientists and analysts. Data storytellers and data artists are the new breeds, who believe in cultivating insightful stories rather than just bland insights. The ability to take data – to be able to understand it, to process it, to extract value from it, to visualise it, to communicate it – that's going to be a hugely important skill in the next decades. Dr Hal R Varian, Chief Economist, Google Dr Varian made the above statement in 2006. It has been more than a decade, and every word has turned out to be true. Data storytellers not only play with the numbers to generate insights; they also ensure the insights are easy to consume. To make this happen, Data Science organisations are creating hybrid teams of creatives and analysts, or creative analysts. This serves the purpose of both analysing the data and presenting the underlying story in the most appealing format possible. Statistics show why converting data into art is an intelligent way of consuming insights. An MIT survey says 90% of the information our brain stores is visual. The same survey concludes that a human brain can process and understand a visual in just 13 milliseconds. A 1986 paper from the University of Minnesota states that our brains can process visuals 60,000 times faster than any textual or verbal content. In a survey from the Wharton School of Business, only half of the audience was convinced by a verbal data presentation; the number increased to 64% when visual language was embedded in it. The same survey also concluded that visualisations in presentations can shorten business meetings by 24%. A Nucleus Research report says that Business Intelligence (BI) with artistic data capabilities offers an ROI of $13.01 for every dollar spent. Tools of Data Artistry There are many ways that data artists are taking the torrent of big data and transforming it into art. Data Visualisation: The basic definition of data art is data visualisation. Good visualisation acts like eye candy, and people remember it for a long time. Pie charts, histograms and line charts are traditional approaches to data visualisation, whereas chord diagrams, choropleths and scatter plots are newer. However, they serve the same purpose: make information beautiful and visually contextualise non-obvious insights. Data Stories: Who doesn't love stories? Stories are memorable. 
While data reveal surprising insights, stories make them worth consuming and memorable. The simple ingredients of a good data story are a problem, an approach, and a solution. Data storytelling companies offer actionable insights to their clients in the form of stories. The art of data storytelling comes with a range of endless creativity. Data Comics: Data comics are a new addition to the family of data artistry. The idea is to go minimal in content and not lose focus on insights. A data comic reveals nothing but insights. Inspired by the language of comics, these are a novel way to communicate visual insights. Data comics have brought data-driven storytelling to a new edge. Data storytelling: The new playground of Data Scientists The modern-age data scientists are excellent writers and eloquent narrators. They take data storytelling as a structured approach to communicating complex data. Data, visuals, and narratives are the key elements of data storytelling. When a narrative is sprinkled on data, it helps the audience quickly grasp the importance of the insights. They can quickly identify outliers and extremes in the data. An insight, no matter how small, is always important. Business users sometimes ignore a few insights, calling them trivial. Narratives add ample summary and commentary and show the importance of those insights. Many patterns and outliers are hidden inside the hefty rows and columns of an excel sheet. Data artists unearth these insights, beautify them, and serve them to the enterprises in a Petri dish. This accelerates decision-making among business users as they get to play with the transformed insights. The intersection of narratives, visuals, and data gives rise to better explanations of data, better consumption of insights, and better decisions. Ultimately, a well-crafted data story with all ingredients in place drives change in organisations. And that's how creative data scientists are using data storytelling as their new playground. Image credit: Brent Dykes Levels of Data Scientists Rising Above Code Earlier, the tools of data scientists were Excel, Python or R. But the rise of AI and Machine Learning has significantly benefitted the process. It has also increased the demand for Data Science professionals. In short, advanced analytics makes it easy to analyse big data. AI and its allies such as Deep Learning, Machine Learning, and Neural Networks are making businesses invest in them. A PwC report recently mentioned AI's potential to add $15.7 trillion to the global economy by 2030. This, in turn, would boost the global economy by 14% over what we see today. This image shows 800 runs of a bicycle being pushed to the right. For each run, the path of the front wheel on the ground is shown until the bicycle has fallen over. The unstable oscillatory nature is due to the subcritical speed of the bicycle, which loses further speed with each oscillation. Image credit: Matthew Cook Firstly, it is good to see that even a trillion-dollar dream is not changing the mindset of data scientists. They are still focusing on telling insights in a memorable and interesting format. Secondly, complex technologies such as AI are now being made available to everyone through artistic approaches. Visionaries across the world are working on making AI simple and easy to use. Data art skills are helping in the process. As mentioned earlier, people understand art more easily than complex science. 
Outlook Data scientists are now data storytellers, and storytelling is among the most essential skills in the digital economy. Data storytellers communicate the drama hidden inside the numbers. Insights alone are not the answer to data problems; an end-to-end data consultancy accelerates decision-making and informs businesses about their considerable pain points. Data art and stories complete the cycle of data consultancy. If we want to make data easy for everyone, we need more data storytellers and artists than analysts and scientists.
Data Science is a high-ranking profession that allows the curious to make game-changing discoveries in the field of Big Data. A report from Indeed, one of the top job sites, has shown a 29% increase in demand for data scientists year over year. Moreover, since 2013, the demand has increased by a whopping 344%. So, […]
["AI Features"]
["Business Intelligence", "Data Visualisation"]
Sunil Sharma
2019-08-29T14:00:33
2019
1,203
["data science", "Go", "machine learning", "AI", "neural network", "Data Visualisation", "Git", "Python", "deep learning", "analytics", "Business Intelligence", "R"]
["AI", "machine learning", "deep learning", "neural network", "data science", "analytics", "Python", "R", "Go", "Git"]
https://analyticsindiamag.com/ai-features/the-art-in-data-science-from-visualisations-to-storytelling/
4
10
2
false
false
true
10,140,986
OpenAI Launches ChatGPT Desktop Version, Mirroring Microsoft’s Copilot
ChatGPT can now work with different apps on macOS and Windows desktops, OpenAI announced on X on 15 November. This marks the company's first direct attempt at computer vision and agent control. ChatGPT 🤝 VS Code, Xcode, Terminal, iTerm2. ChatGPT for macOS can now work with apps on your desktop. In this early beta for Plus and Team users, you can let ChatGPT look at coding apps to provide better answers. pic.twitter.com/3wMCZfby2U – OpenAI Developers (@OpenAIDevs) November 14, 2024 This early beta update claims to let ChatGPT examine coding apps to provide better answers for Plus and Team users. It not only assists with coding apps like VS Code, Xcode, Terminal, and iTerm2 but also talks to its users (through its voice assist feature), lets them take screenshots, upload files, and search the web (through SearchGPT). As reported earlier, Anthropic also made Claude Artifacts available to all users on iOS and Android, allowing anyone to create apps easily without writing a single line of code. A ChatGPT feature that becomes highly beneficial in desktop use is the ability to ask about anything on screen. Users can select any section of any document and open ChatGPT to ask for meanings, explanations, and feedback. This is a desktop implementation of ChatGPT's most evident function. This development follows the discussions from a day ago about OpenAI's agent, 'Operator', which is to be released in January 2025. Rowan Cheung, founder of 'The Rundown AI', speculates that the next step beyond this would be to allow ChatGPT to control and see desktops as an agent. OpenAI Follows Suit In October this year, Microsoft released its 'Copilot Vision' to transform autonomous workflows with Copilot. According to Microsoft, these autonomous agents would be the new 'apps' for an AI-driven world, executing tasks and managing business functions on behalf of individuals, teams, and departments. Meanwhile, the company also introduced ten new autonomous agents in Dynamics 365 to automate processes like lead generation, customer service, and supplier communication for organisations. Following that, Anthropic made a big announcement by releasing its new Claude 3.5 Sonnet, which can control computers with the beta feature 'Computer Use'. The company reported that the model made significant progress in agentic coding tasks, which involve AI autonomously generating and manipulating code. This approach to Claude's computer-use feature stood out because it didn't rely on multiple agents to perform different tasks; instead, a single agent managed multiple tasks. As AIM compared earlier, Microsoft integrated Copilot into MS Excel, while Claude directly operated Excel, which called into question the need for Copilot. OpenAI wasn't far behind, even though this move by Anthropic and others (like Google's Jarvis, speculated to release this month) had created a stronghold in the AI industry. OpenAI's focus has also shifted from expanding its features to the interface. OpenAI entered this race by introducing the Swarm framework, an approach for creating and deploying multi-agent AI systems. It was the missing piece that simplified the process of creating and managing multiple AI agents, helping them work together to accomplish complex tasks. Following that, the launch of ChatGPT on desktops was a major step for a pioneer in AI to transform the way the chatbot is used, only to be enhanced by 'Operator' in January. Now, the chatbot will be able to provide answers, be a companion, and assist with daily tasks.
OpenAI makes its first move into computer vision and agent control.
["AI News"]
["ChatGPT", "OpenAI"]
Sanjana Gupta
2024-11-15T14:07:57
2024
550
["Anthropic", "ChatGPT", "Go", "OpenAI", "AI", "autonomous agents", "computer vision", "Aim", "Claude 3.5", "R"]
["AI", "computer vision", "ChatGPT", "OpenAI", "Claude 3.5", "Anthropic", "Aim", "autonomous agents", "R", "Go"]
https://analyticsindiamag.com/ai-news-updates/openai-launches-chatgpt-desktop-version-mirroring-microsofts-copilot/
2
10
2
true
false
false
10,135,702
Most Successful Companies are the Ones that Pivoted
The origin of many big companies is not as straightforward as it seems. Finding the correct product-market fit can take months or even years, and the result is sometimes far from the original idea. The term 'pivot' was first publicly used by Eric Ries, an entrepreneur and author, in his book about how course correction by founders is important for success. In India, two of the leading startups, Zepto and Razorpay, pivoted in their early days. Interestingly, both these unicorns are alumni of Y Combinator, the San Francisco-based startup school, which, with a less than 1% acceptance rate, guides founders into the right pivot. Globally as well, several YC-backed companies, like Clipboard Health, Brex, Goat, and Escher Reality, among many others, went through cycles of feedback at YC to reach their consumer base. "The idea maze is a perfect competition," Garry Tan wrote on X, commenting on the recent launch of Void AI, which is, interestingly, the fifth YC-backed code editor in a market filled with AI editors. Should You Pivot Fast? Globally, the examples are plenty. However, below are a few companies that pivoted within a year of launch. Instagram, the social photo-sharing app founded by Kevin Systrom and Mike Krieger in 2010, initially began as Burbn, a location-sharing app where people could check in and upload photos. Within a year, however, the founders pivoted to focus solely on photo sharing, its chief and most-used feature. Instagram reported over two billion monthly active users as of early this year. Acquired later by Facebook (now Meta), the journey of Instagram, both in its pivot and its acquisition, is a masterclass in strategy. Twitter, too, was originally a podcasting company called Odeo, and it is interesting to note the social network's evolution from that to what it is today. The launch of iTunes rendered Odeo's business model useless, forcing the founders to build on a new idea. In October 2022, Elon Musk acquired Twitter and rebranded it as X. Slack, a cloud-based communication platform for enterprises, was initially founded as an online gaming company called Glitch. Due to the lack of commercial traction, the founders decided to build on the chat feature, which was underrated at the time. In a way, Slack was the result of a Glitch! YouTube, a leading online video platform, found its start as a dating site where people uploaded videos talking about their partners. But within a week, the founders realised that the idea was not unique. By generalising the core product beyond dating videos, they used their internal tech to create the video-sharing app. One of the first videos uploaded on YouTube was posted by one of the founders, Jawed Karim. WhatsApp, the messaging giant, has a similar story: it started as an app merely for sharing statuses with friends. PayPal, an online payments platform, started out as an encryption services application known as Confinity. Its journey to what it is today included not one but multiple pivots. Later, eBay acquired PayPal in a deal valued at $1.5 billion. Some Took More than a Year For instance, Hugging Face, an AI and machine learning collaborative platform, began as an entertainment app. After two-and-a-half years, the founder pivoted by launching a model he was working on, which immediately gained traction. Notion, the all-in-one productivity platform, had its origins as a website builder. After a very unsuccessful start, the founders took four years to pivot and build on its collaborative features. 
The founders fired the team and relocated to Kyoto to rebuild the app from scratch. Twitch, the famous streaming platform, initially began as a 24-hour reality TV webcaster streaming Justin Kan's life. It took the founders five years to double down on the games and streaming aspects of the startup to differentiate it from the rest; the core product was applied to a different problem. Later, Amazon acquired Twitch for nearly $1 billion. The Trend Continues Among Startups Since the launch of OpenAI's ChatGPT in 2022, a lot of startups have pivoted towards building an AI-first product or solution. Earlier this year, SoftBank signalled its pivot towards becoming AI-focused, after funding several AI-driven startups in the ecosystem. Funding for AI startups in India totalled $8.2 million in the April-June 2024 quarter. Despite the enthusiasm, many AI startups get acquired by larger corporations due to challenges like funding. To stand out, it is imperative for startups to focus on building the next LLM instead of rebuilding existing use cases of AI. Many founders face the dilemma of merging, pivoting, or getting acquired when their startup does not perform well in its early days. Pivoting May Not Always Be The Answer As seen above, while pivoting is considered normal, and sometimes even healthy, for startups, there are many times when founders should be cautious about it. "If you pivot over, and over, and over again, it causes whiplash. Whiplash is very bad because it causes founders to give up and not want to work on this anymore, and that actually kills the company. Weirdly, it's more deadly to your company to get whiplash and get sad than to work on a bad idea," said Dalton Caldwell, partner at Y Combinator. There is even a term for it within the YC community, 'Pivot Hell', which founders must avoid at all costs.
"The idea maze is a perfect competition."
["AI Features"]
["Startups"]
Aditi Suresh
2024-09-18T19:12:27
2024
904
["Go", "ChatGPT", "Hugging Face", "machine learning", "OpenAI", "AI", "GPT", "CLIP", "GAN", "Startups", "R"]
["AI", "machine learning", "ChatGPT", "OpenAI", "Hugging Face", "R", "Go", "GPT", "CLIP", "GAN"]
https://analyticsindiamag.com/ai-features/most-successful-companies-are-the-ones-that-pivoted/
3
10
5
true
true
false
10,115,679
Oracle Enhances Cloud Suite with Additional AI Features for Key Business Areas
At its flagship event, Oracle CloudWorld London, Oracle announced the integration of new generative AI capabilities into its Oracle Fusion Cloud Applications Suite, a move set to significantly enhance decision-making and user experiences across various business domains. The suite now includes over 50 generative AI use cases, built on Oracle Cloud Infrastructure (OCI) and designed to respect enterprise data, privacy, and security. These capabilities are embedded within the business workflows of finance, supply chain, HR, sales, marketing, and service, aiming to boost productivity, reduce costs, and improve both employee and customer experiences. "We have been using AI in our applications for several years and now we are introducing more ways for customers to take advantage of generative AI across the suite," said Steve Miranda, executive vice president of applications development at Oracle. "With additional embedded capabilities and an expanded extensibility framework, our customers can quickly and easily take advantage of the latest generative AI advancements." In the realm of Enterprise Resource Planning (ERP), the suite now includes insight narratives for anomaly and variance detection, management reporting narratives for finance professionals, predictive forecast explanations, and generative AI-powered project program status summaries and project plan generation. For Supply Chain & Manufacturing (SCM), the suite offers item description generation to aid product specialists and supplier recommendations to streamline procurement processes. Additionally, negotiation summaries are now generated more efficiently with AI assistance. Human Capital Management (HCM) benefits from job category landing pages for better candidate engagement, job match explanations to assist candidates in finding suitable roles, a candidate assistant for common inquiries, and manager survey generation for timely employee feedback. Customer Experience (CX) is enhanced with service webchat summaries for call center agents, assisted authoring for sales content to improve productivity, and generative AI for marketing collateral to optimize audience engagement. Oracle's approach ensures that no customer data is shared with large language model providers or seen by other customers. Role-based security is also embedded directly into workflows, ensuring that only entitled content is recommended to end users. The generative AI capabilities are expected to have a profound impact on customers and industries by streamlining operations and enabling more efficient and informed decision-making processes.
The suite now includes over 50 genAI use cases, built on OCI and designed to respect enterprise data, privacy, and security.
["AI News"]
["Oracle"]
Shyam Nandan Upadhyay
2024-03-14T15:57:57
2024
361
["Go", "API", "programming_languages:R", "AI", "ML", "programming_languages:Go", "Oracle", "Aim", "ViT", "generative AI", "R"]
["AI", "ML", "generative AI", "Aim", "R", "Go", "API", "ViT", "programming_languages:R", "programming_languages:Go"]
https://analyticsindiamag.com/ai-news-updates/oracle-enhances-cloud-suite-with-additional-ai-features-for-key-business-areas/
2
10
3
false
false
false
End of preview.

Analytics India Magazine Technical Articles Dataset 🚀

Dataset Description

This comprehensive dataset contains 25,685 high-quality technical articles from Analytics India Magazine, one of India's leading publications covering artificial intelligence, machine learning, data science, and emerging technologies.

✨ Dataset Highlights

  • 📚 Comprehensive Coverage: Latest AI models, frameworks, and tools
  • 🔬 Technical Depth: Extracted keywords and complexity scoring
  • 🏭 Industry Focus: Real-world applications and insights
  • ⚡ Multiple Formats: JSON and optimized Parquet files
  • 🎯 ML Ready: Pre-processed and split for training

Dataset Statistics

  • Total Articles: 25,685
  • Technical Articles: 25,647
  • Average Word Count: 724 words
  • Language: English
  • Source: Analytics India Magazine

🎯 Technologies Covered

AI & Machine Learning

  • Large Language Models: GPT, Claude, Gemini, Llama
  • Frameworks: TensorFlow, PyTorch, Hugging Face
  • MLOps Tools: MLflow, Weights & Biases, Kubeflow
  • Agent Frameworks: LangChain, AutoGen, CrewAI

Programming & Tools

  • Languages: Python, JavaScript, SQL
  • Cloud Platforms: AWS, Azure, GCP
  • Development: APIs, Docker, Kubernetes

📊 Dataset Structure

Core Fields

  • title: Article title
  • content: Full article content (cleaned)
  • excerpt: Article summary
  • author_name: Article author
  • publish_date: Publication date
  • url: Original article URL

Technical Metadata

  • extracted_tech_keywords: Technical terms found in content
  • technical_depth: Number of technical keywords
  • complexity_score: Technical complexity (0-4)
  • word_count: Article length
  • categories: Article categories
  • tags: Content tags

Quality Indicators

  • has_code_examples: Contains code snippets
  • has_tutorial_content: Tutorial or how-to content
  • is_research_content: Research or analysis
  • has_external_links: Contains external references
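
Taken together, the metadata and quality fields above give simple handles for slicing the corpus. As a minimal sketch (using the Hugging Face loading shown in the Quick Start below), the following keeps only reasonably technical tutorials that include code; the thresholds are illustrative, not part of the dataset:

from datasets import load_dataset

# Load the dataset and keep technical tutorials that ship code snippets
dataset = load_dataset("abhilash88/aim-technical-articles")
tutorials_with_code = dataset["train"].filter(
    lambda x: x["has_tutorial_content"]
    and x["has_code_examples"]
    and x["complexity_score"] >= 2
)
print(len(tutorials_with_code))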

📋 Dataset Splits

  • Train: 19,221 examples (model training)
  • Validation: 2,136 examples (hyperparameter tuning)
  • Test: 3,769 examples (final evaluation)

🚀 Quick Start

Using Hugging Face Datasets

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("abhilash88/aim-technical-articles")

# Access splits
train_data = dataset["train"]
test_data = dataset["test"]

# Filter technical articles
technical_articles = dataset.filter(
    lambda x: x["technical_depth"] >= 3
)
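
If you only want to peek at a few records, streaming mode is usually an option as well; treat this as a sketch, since it relies on the hosted Parquet conversion of the dataset:

from datasets import load_dataset

# Iterate over records without downloading the full dataset
stream = load_dataset(
    "abhilash88/aim-technical-articles", split="train", streaming=True
)
for article in stream.take(5):
    print(article["title"], "-", article["word_count"], "words")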

Using Pandas

import pandas as pd

# Load from JSON
df = pd.read_json("aim_full_dataset.json")

# Load from Parquet (faster)
df = pd.read_parquet("aim_full_dataset.parquet")

# Convert list columns back from JSON strings
import json
df['categories'] = df['categories'].apply(json.loads)
df['extracted_tech_keywords'] = df['extracted_tech_keywords'].apply(json.loads)
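
Once loaded, the metadata columns listed under Dataset Structure support quick aggregate views. As a small, hedged example (column names as documented above), this counts articles and average word count per publication year:

import pandas as pd

# Load the Parquet export and summarise volume and length per publication year
df = pd.read_parquet("aim_full_dataset.parquet")
summary = (
    df.groupby("publication_year")
      .agg(articles=("title", "count"), avg_words=("word_count", "mean"))
      .sort_index()
)
print(summary)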

🎯 Use Cases

Machine Learning

  • Text Classification: Topic classification, difficulty assessment (see the sketch after this list)
  • Content Generation: Article summarization, content creation
  • Recommendation Systems: Technical content recommendations
  • Question Answering: Technical QA systems
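
As an illustration of the text-classification use case above, a tiny baseline can be built from the title, content, and categories fields. This is only a sketch: it assumes scikit-learn is installed and that categories is stored as a JSON-encoded list, as in the JSON/CSV exports.

import json

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Use the first listed category as a single-label target
df = pd.read_json("aim_full_dataset.json")

def first_category(raw):
    # categories may arrive as a JSON string or an already-parsed list
    cats = json.loads(raw) if isinstance(raw, str) else raw
    return cats[0] if cats else "Unknown"

df["label"] = df["categories"].apply(first_category)
text = df["title"] + " " + df["content"]

X_train, X_test, y_train, y_test = train_test_split(
    text, df["label"], test_size=0.2, random_state=42
)
vectorizer = TfidfVectorizer(max_features=20000, stop_words="english")
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
print("accuracy:", clf.score(vectorizer.transform(X_test), y_test))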

Business Intelligence

  • Trend Analysis: Technology trend identification
  • Market Research: Industry insights and analysis
  • Content Strategy: Editorial planning and optimization

Education & Research

  • Curriculum Development: AI/ML course creation
  • Knowledge Mining: Technical concept extraction (see the keyword-count sketch below)
  • Academic Research: Technology adoption studies
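
For the knowledge-mining use case, the extracted_tech_keywords column already does most of the work. A short sketch (again assuming the JSON export, with keyword lists stored as JSON strings) that counts the most frequent terms:

import json
from collections import Counter

import pandas as pd

# Tally the most common extracted technical keywords across articles
df = pd.read_json("aim_full_dataset.json")
keyword_counts = Counter()
for raw in df["extracted_tech_keywords"]:
    keywords = json.loads(raw) if isinstance(raw, str) else raw
    keyword_counts.update(keywords)
print(keyword_counts.most_common(20))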

📦 Available Files

Standard Formats

  • aim_full_dataset.json - Complete dataset
  • aim_full_dataset.csv - CSV format
  • aim_full_dataset.parquet - Optimized Parquet format

Specialized Subsets

  • aim_quality_dataset.json - High-quality articles (300+ words)
  • aim_technical_dataset.json - Highly technical content
  • aim_tutorial_dataset.json - Educational content
  • aim_research_dataset.json - Research and analysis articles

ML-Ready Splits

  • train.json / train.parquet - Training data
  • test.json / test.parquet - Test data
  • validation.json / validation.parquet - Validation data (if available)

📈 Content Quality

  • Duplicate Removal: All articles are unique by ID
  • Content Filtering: Minimum word count requirements
  • Technical Validation: Verified technical keywords
  • Clean Processing: HTML removed, text normalized
  • Rich Metadata: Comprehensive article classification

βš–οΈ Ethics & Usage

Licensing

  • License: MIT License
  • Attribution: Analytics India Magazine
  • Usage: Educational and research purposes recommended

Content Guidelines

  • All content is publicly available from the source
  • Original URLs provided for attribution
  • Respects robots.txt and rate limiting
  • No personal or private information included

📚 Citation

@dataset{aim_technical_articles_2025,
  title={Analytics India Magazine Technical Articles Dataset},
  author={Abhilash Sahoo},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/abhilash88/aim-technical-articles}
}

🤝 Contact & Support

For questions, issues, or suggestions, please open a discussion on the Hugging Face dataset page.

🔄 Updates & Versions

  • Version 2.0 (Current): Enhanced processing, technical depth scoring
  • Last Updated: 2025-07-11
  • Processing Pipeline: Optimized extraction with 2025 tech coverage

🎯 Ready to power your next AI project with comprehensive technical knowledge!

This dataset captures the cutting edge of AI and technology discourse, perfect for training models, research, and building intelligent applications.
