I am a Delhiite currently studying at IIIT Hyderabad. I believe in being not a jack of all trades but the king of them all. I have been given a lot of responsibility since my childhood. I was shy and introverted until 10th standard, but a series of events got me out of it. I was part of the student council, which managed school-level events, and was eventually appointed its head, i.e. the Head Boy, which gave me plenty of invaluable experience.
I was also something of a jock in school: I played basketball, football, table tennis, etc. for my school, and we won some trophies along the way. Currently I am the CTO of my startup, Abatar. I am not much of a reader, but I love philosophy. I am certainly a melomaniac and a backpacker, and I love treks and solo trips. I am a Green Panther and would never miss an opportunity to go on a trip with a stunning view.
As for my interests in tech, I love machine learning and have some experience in it as well. I would love to build a humanoid robot one day that passes the Turing test.
I am good with teamwork and management: I currently head the Corporate Relations team of IIIT Hyderabad's Entrepreneurship Cell, and I am also an admin of IIIT's Open Source Development Group.
Working as a SWE intern at Google Bangalore. My project involves migrating an internal Google tool named Billy to a new platform, which mainly calls for full-stack development in Java and design-thinking skills.
My project was to migrate an internal Google tool to a different framework as part of a move away from Google Web Toolkit. To do this I had to study Boq, Guice, protocol buffers, the apps framework, promise graphs, and many other technologies used at Google. My work involved creating an independent server which is currently running in production.
CrowdAI enables data science experts and enthusiasts to collaboratively solve real-world problems through challenges.
I am working as a web developer to fix issues with the web app and to revamp the application as a whole. This mainly involves working with the MVC structure of Rails 5 and deploying on Heroku.
I am mainly responsible for creating a rating algorithm as well as redesigning the whole web application.
Understanding controversies in online news is crucial for journalists, online social networks, and policy makers. In this research, we present a new metric, Controversy Score, for detection of controversial content in social media. The score employs a statistical approach that infers the controversiality of content from the toxicity of its comments, such that the toxicity distribution approximates either an 'M'- or 'U'-shaped distribution. We validate the approach using a dataset of 180,733 YouTube comments from an online news publisher. In addition, we build a predictive model to score the controversiality of a news story even when its comments are disabled. Our findings suggest that the most engaging videos are also the most controversial ones. Furthermore, a qualitative analysis of the controversial themes suggests that the framing of the story impacts its controversiality.
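The score's exact estimator is in the paper; as a rough illustration, one standard way to flag the 'M'- or 'U'-shaped toxicity distributions described above is a bimodality coefficient over per-comment toxicity values (the function and the example data below are illustrative, not the published method):

```python
# Illustrative sketch only: a bimodality-based controversy signal, not
# the paper's published estimator.
import numpy as np
from scipy.stats import skew, kurtosis

def controversy_score(toxicity):
    """toxicity: per-comment toxicity values in [0, 1]."""
    x = np.asarray(toxicity, dtype=float)
    n = len(x)
    if n < 4:
        return 0.0                      # too few comments to judge shape
    g = skew(x)                         # sample skewness
    k = kurtosis(x)                     # excess kurtosis (Fisher)
    # Sarle's bimodality coefficient: larger values indicate the
    # polarized, M/U-shaped distributions associated with controversy.
    return (g**2 + 1) / (k + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

# Comments split between mild and highly toxic reactions score higher
# than uniformly mild ones.
polarized = [0.05, 0.1, 0.08, 0.9, 0.95, 0.85, 0.1, 0.92]
mild = [0.10, 0.12, 0.14, 0.15, 0.16, 0.17, 0.19, 0.21]
print(controversy_score(polarized) > controversy_score(mild))  # True
```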
This project is an extension of last year's GSoC project. Over the summer I added several new features to the existing leaflet-blurred-location library and created a companion library that displays locations saved by leaflet-blurred-location in such a way that no information is leaked without the user's permission. The project also involved refactoring the existing code, adding CodeClimate to the repository, adding new UI tools to leaflet-blurred-location, and fixing several earlier bugs.
For leaflet-blurred-location-display, almost the entire repository was set up from scratch and integrated with leaflet-blurred-location to provide a demo. The demo is live and supports all the features of leaflet-blurred-location.
Working as a senior data scientist to build production models that track man-hours for factory workers. This was later extended to check the uniforms and safety equipment used by the workers. The project mainly involved computer vision models and modifying some pre-existing techniques to get the results we wanted.
The technologies used in this project were Python, TensorFlow, PyTorch, CNNs, ANNs, and some image processing.
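As a rough sketch of the tracking idea, assuming OpenCV's stock HOG person detector stands in for the production CNN models (the `worker_seconds` helper is made up for illustration):

```python
# Sketch: accumulate time over frames in which a worker is detected.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def worker_seconds(video_path):
    """Seconds of footage in which at least one worker is visible."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    present = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes):
            present += 1                 # worker present in this frame
    cap.release()
    return present / fps
```

Summing this per camera and per shift gives a crude man-hours figure; the production system would additionally need per-worker identity and the uniform and safety-equipment checks mentioned above.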
Rating Online Content Based on Toxicity of Its Comments.
There is a plethora of work on the detection and classification of online hate. Yet few studies aim to develop metrics that help decision makers better understand online hate in their social media content.
This research undertakes that challenge and introduces the provocation score, a measure of the hatefulness of online videos based on the number and intensity of hateful comments a video receives. Easily understandable metrics and visualizations of the prevalence of hate across content pieces are crucial for community moderators and content creators as they evaluate how provocative their content is and plan their production activities.
Abatar helps celebrities and influencers understand and communicate with their audiences personally, at scale. It helps users build a better social profile with less maintenance and hard work. You get your social profile on all social platforms, stay updated on what is going on, and receive suggestions about what you should be posting. It also predicts the number of likes and comments you would receive for your posts.
We use machine learning to predict all of this from your history. Later versions will include posting on your behalf, recommending people to follow, suggesting how to grow your following, etc. You can learn more about it here
Creating pictures of faces using generative models: working with conditions such as ethnicity, age, and gender to generate faces with stacked GANs, after preprocessing, for automated persona generation. I am using multiple layers in the stack, each consisting of generators and discriminators of different dimensions. The idea was taken from the NVIDIA paper on high-resolution face generation, but here it has to work with conditions on the faces.
We focus on feeding face structure, complexion, and age as input features and getting a new face in return. Later stages include automated persona generation, where one can build a whole new online persona from only a few input features.
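A minimal PyTorch sketch of the conditioning idea (the layer sizes, the single condition embedding, and the 64x64 output are illustrative, not the actual stack):

```python
# Sketch: a conditional generator that concatenates noise with an
# embedded (age, gender, ethnicity) condition. Each stage of the real
# stack would add a generator/discriminator pair at higher resolution.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=100, n_conds=8, embed_dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_conds, embed_dim)   # condition lookup
        self.net = nn.Sequential(
            nn.Linear(z_dim + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Tanh(),     # pixels in [-1, 1]
        )

    def forward(self, z, cond):
        # cond: integer id of an (age, gender, ethnicity) bucket
        x = torch.cat([z, self.embed(cond)], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

z = torch.randn(4, 100)
cond = torch.tensor([0, 1, 2, 3])        # four hypothetical buckets
faces = CondGenerator()(z, cond)         # -> (4, 3, 64, 64) images
```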
Creating a recommender system to suggest a passenger's next destination from basic information such as age and gender, together with information from the passenger's past flight records, using the LightFM and Surprise libraries.
LightFM is a Python implementation of a number of popular recommendation algorithms for both implicit and explicit feedback.
It also makes it possible to incorporate both item and user metadata into the traditional matrix factorization algorithms. It represents each user and item as the sum of the latent representations of their features, thus allowing recommendations to generalise to new items (via item features) and to new users (via user features).
Surprise provides various ready-to-use prediction algorithms such as baseline algorithms, neighborhood methods, matrix factorization-based methods (SVD, PMF, SVD++, NMF), and many others. Various similarity measures (cosine, MSD, Pearson, etc.) are also built in.
Our ensemble model gave good results, with accuracy up to 87%.
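A minimal sketch of the LightFM half, with made-up passenger data (the real features and flight records are not shown here):

```python
# Sketch: LightFM with user metadata, on toy passenger/destination data.
import numpy as np
from lightfm import LightFM
from lightfm.data import Dataset

flights = [("u1", "DEL"), ("u1", "BLR"), ("u2", "BLR"), ("u3", "HYD")]
user_meta = [("u1", ["age:20s", "gender:M"]),
             ("u2", ["age:30s", "gender:F"]),
             ("u3", ["age:20s", "gender:F"])]

dataset = Dataset()
dataset.fit(users=[u for u, _ in user_meta],
            items=["DEL", "BLR", "HYD"],
            user_features=["age:20s", "age:30s", "gender:M", "gender:F"])
interactions, _ = dataset.build_interactions(flights)
user_features = dataset.build_user_features(user_meta)

# WARP loss suits implicit "flew to this destination" feedback.
model = LightFM(loss="warp", no_components=16)
model.fit(interactions, user_features=user_features, epochs=30)

# Score every destination for passenger u3; the argmax is the suggestion.
uid = dataset.mapping()[0]["u3"]
scores = model.predict(uid, np.arange(3), user_features=user_features)
print(scores.argmax())
```

The user features are what let the model fall back on age and gender for passengers with little flight history; the Surprise models would then be blended in on top.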
Using Selenium to bypass captchas and parse information about property sales in the US on a regular basis, delivering the output as a formatted CSV, as requested by the client.
Working as a researcher on a paper about eye-tracking data: I created a deep learning model to track eye movement on a set of templates, with good results and accuracy up to 95%. A second paper involved predicting the success rate of Finnish Facebook ads from the appearance of their thumbnails and text; this could potentially turn into a larger project.
Working as a researcher to create a deep neural network that predicts the success rate of a given Facebook ad, taking basic attributes of the ad (thumbnail, description, etc.) as input and predicting the number of hits, likes, and so on.
Made a few scripts and web crawlers for SMODEX (a startup) to gather data. Currently working on a deep learning NLP model for resume parsing.
Created a JavaScript library that adds extra features to Leaflet maps, and integrated them into the main Public Lab website. Future plans include extending the library for use by other organizations as well.
Worked on the UI/UX revamp of the intranet website for the Petroleum Ministry under the National Informatics Centre. Used HTML and CSS to build the frontend and JavaScript for the backend and content management.
The programme aims to impart core concepts and skills in the broad areas of engineering, sciences, mathematics, and humanities, supported by practicums. The practicums are intended to give students first-hand experience of leveraging their initial breadth (and skills) in the programme and applying it to develop solutions to real-life problems.
A Leaflet-based HTML interface for selecting a "blurred" or low-resolution location, to preserve privacy. Leaflet.BlurredLocation also tries to return a human-readable string description of the location at a specificity corresponding to the displayed precision of the selected location.
An AI bot that plays Connect 4 using the minimax algorithm with alpha-beta pruning. It works at three difficulty levels, depending on the depth to which the bot predicts moves. Everything is done in C++.
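The bot itself is in C++; the Python sketch below shows the same minimax-with-alpha-beta logic on a bare-bones Connect 4 board (the depth-0 heuristic here is deliberately trivial; the real bot would score open threats):

```python
# Sketch: alpha-beta minimax over a minimal Connect 4 board.
import math

ROWS, COLS = 6, 7

def moves(board):
    return [c for c in range(COLS) if board[0][c] == 0]

def apply_move(board, col, player):
    new = [row[:] for row in board]
    for r in range(ROWS - 1, -1, -1):
        if new[r][col] == 0:
            new[r][col] = player
            break
    return new

def winner(board):
    for r in range(ROWS):
        for c in range(COLS):
            p = board[r][c]
            if p == 0:
                continue
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if all(0 <= r + i*dr < ROWS and 0 <= c + i*dc < COLS
                       and board[r + i*dr][c + i*dc] == p
                       for i in range(4)):
                    return p
    return 0

def alphabeta(board, depth, alpha, beta, player):
    w = winner(board)
    if w:
        return 1000 if w == 1 else -1000
    if depth == 0 or not moves(board):
        return 0                         # trivial leaf heuristic
    if player == 1:                      # maximizing side
        best = -math.inf
        for c in moves(board):
            best = max(best, alphabeta(apply_move(board, c, 1),
                                       depth - 1, alpha, beta, 2))
            alpha = max(alpha, best)
            if beta <= alpha:
                break                    # prune: opponent avoids this line
        return best
    best = math.inf
    for c in moves(board):
        best = min(best, alphabeta(apply_move(board, c, 2),
                                   depth - 1, alpha, beta, 1))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

board = [[0] * COLS for _ in range(ROWS)]
# The three difficulty levels just vary this search depth.
scores = {c: alphabeta(apply_move(board, c, 1), 4,
                       -math.inf, math.inf, 2)
          for c in moves(board)}
print(max(scores, key=scores.get))       # best opening column
```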
Contributed to FreedomBox, a free-software self-hosting web server for deploying social applications on small machines. It provides online communication tools that respect your privacy and data ownership.
Built a web scraper to collect resumes from https://indeed.com/ using email accounts as payloads and creating new sessions at intervals of time, making the traffic as human-like as possible and preventing it from getting banned by Indeed. It successfully collected about 10k resumes and can fetch more given just a job description and location.
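A minimal sketch of the session-rotation part (the URLs, rotation interval, and delay range are placeholders, and the account-cycling logic is omitted):

```python
# Sketch: rotate to a fresh session every N requests and add randomized
# pauses so the traffic pattern looks more human.
import random
import time
import requests

HEADERS = {"User-Agent": "Mozilla/5.0"}   # a realistic UA in practice

def fetch_pages(urls, per_session=25):
    session = requests.Session()
    for i, url in enumerate(urls):
        if i and i % per_session == 0:
            session = requests.Session()  # fresh cookies and connection
        resp = session.get(url, headers=HEADERS, timeout=10)
        yield resp.text
        time.sleep(random.uniform(2, 6))  # human-ish pause
```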
This project visualizes the data and trains an ARIMA model for each language category. We plan to use an ensemble of ARIMA (autoregressive integrated moving average) and LSTM (long short-term memory) models for the time-series problem of predicting traffic on Wikipedia pages from past data.
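A minimal sketch of the ARIMA half, on synthetic traffic standing in for one language category (the order (7, 1, 1) is just a plausible starting point; in practice it would be tuned per category, e.g. via AIC):

```python
# Sketch: fit an ARIMA model to a daily page-view series and forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily traffic with a weekly cycle, standing in for one
# language category's Wikipedia page views.
rng = np.random.default_rng(0)
days = pd.date_range("2017-01-01", periods=365, freq="D")
views = pd.Series(1000 + 200 * np.sin(2 * np.pi * np.arange(365) / 7)
                  + rng.normal(0, 50, size=365), index=days)

fit = ARIMA(views, order=(7, 1, 1)).fit()
print(fit.forecast(steps=30).head())      # next 30 days of traffic
```

The LSTM forecasts would then be combined with these (e.g. by averaging) to form the ensemble.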