
Will Uber be consumed by its own fire?

“His biggest strength is that he will run through a wall to accomplish his goals. His biggest weakness is that he will run through a wall to accomplish his goals.”

 

That’s how a New York Times article described Travis Kalanick, the ex-CEO of Uber. Travis founded Uber in 2009, and his last address to employees was a tearful farewell. He built an empire on a competitive and unforgiving workplace culture. This empire is worth more than most automakers, and it is arguably the most influential transport company. But has Uber become a bubble so big that it is about to burst?

Uber has grown exponentially in the last five years. Its valuation has doubled in a matter of months, yet Uber has never reported a profit. Its policies have made it public enemy #1 many times, most recently in London, where it was banned by Transport for London (TfL). Uber has bet on AI as its future. For years, Travis argued that Uber would slowly become profitable once it replaced human drivers with self-driving cars. But this bet has led to the most damaging lawsuit against Uber. Waymo, an Alphabet company, sued Uber for stealing Google’s LIDAR technology. Anthony Levandowski allegedly downloaded Waymo’s highly confidential files and trade secrets before resigning to found Otto, which was then bought by Uber. Waymo alleged that Uber knew that Otto had Waymo’s secret files. This was the final nail in Travis’ coffin, a coffin whose first nails had been driven months earlier.

 

The first nail was the mistreatment of employees by managers at Uber; in response, Uber fired over 20 employees. Then came the video of Travis Kalanick berating an Uber driver. But these did not end Travis’ time at Uber; the impossible dream he was selling did.

 

Uber is available in over 60 countries and has a loyal customer base, yet it failed in China, where Didi Chuxing bought Uber China. In India, Ola is the dominant player in the taxi market. In the USA, Lyft has finally started to come out of Uber’s shadow and is slowly expanding to more cities. Travis had argued that if Uber employed only self-driving taxis, operating costs would drop drastically. But was this even possible in less than half a decade? Uber gave up on developing its own technology when it agreed to buy Volvo’s self-driving cars. The dream seems too distant to be true, and the Waymo lawsuit may well kill it.

 

Travis built his company on the idea of making the impossible dream of self-driving cars a reality, quickly. Uber’s valuation is built on this idea. Uber, it seems, is being consumed by its own fire: the fire it lit when it changed the automobile market forever.

Sudhanshu Agarwal

Mafia 3

Released in October 2016, Mafia 3 is probably the best game in the Mafia series so far. Developed by Hangar 13 and published by 2K Games, the third installment in the Mafia franchise centres on a huge open-world environment. Unlike the previous games in the series, it focuses on revenge: building your own gang, going up against the mafia, and burning them to the ground with military-grade weaponry. It is available on PS4, Xbox One and PC. Winner of a NAVGTR award for its dramatic score, this game is surely a good pick for action-loving gamers.

PLOT

Mafia 3 is set in 1968 in New Bordeaux, Louisiana, the hometown of our hero and protagonist, Lincoln Clay. With military training and combat experience in Vietnam, Lincoln is the tough guy you can rely on for anything.

After serving in the Vietnam War with the 5th Special Forces and 223rd Infantry Regiment, Lincoln Clay comes back to his surrogate family: the black mob led by Sammy Robinson and his adopted brother Ellis. They team up with the Italian mafia led by Don Marcano to loot the Louisiana Federal Reserve, but after the heist they are betrayed, and Lincoln is the only one who manages to survive. The story revolves around Lincoln building a new family on the ashes of the old and blazing a path of revenge through the brutal criminal underworld responsible.

GAMEPLAY

The gameplay mostly involves driving around New Bordeaux, getting missions from various characters, reaping the rewards, and occasionally getting sidetracked along the way. The story missions are big and set in unique, well-designed locations: fighting your way through a creepy abandoned amusement park, escaping a bank vault after a heist, and sneaking aboard a sinking riverboat by swimming through gator-infested water all provide memorable, action-packed moments.

For transportation, the player gets a lot of options to choose from, ranging from exotic ’60s coupes like the super-cool “Griffin Marauder” to trucks like the “Bulworth Buckliner”, as well as boats and other military vehicles. Driving is the fastest way to get around the vast map.

My favourite way to get around is the sports car ‘Berkley Stallion’. It is easy to manoeuvre, has decent speed and is highly durable.

BUT, there are certain other issues as well. The version of the game released for consoles has lag and a few bugs. When the intense action and battles start, the frame rate drops below 30fps. This causes glitches, and the protagonist may end up dying while engaged in combat in the middle of a mission.

For instance, in the early mission where I was supposed to go and kill Baka, the leader of the Haitian mob, I was trying to throw Molotov cocktails, but they ended up burning me because a glitch stopped me from throwing them.

The story missions are also very repetitive; doing the same things over and over leads to the player getting bored.

COMBAT

The game does justice to the action and mayhem part associated with the Mafia series.

Using the huge arsenal of vintage weaponry and driving around in stunning antique vehicles, you set New Bordeaux on fire wherever you go. The approach is yours to choose: stealth or a full-frontal attack.

The game, a third-person shooter, leans towards mid- to long-range gunplay, though some missions involve more melee combat as well. Lincoln has his signature knife and carries a light and a heavy weapon along with grenades.

The game features a cover system, allowing players to take cover behind objects to avoid enemy gunfire or detection. Players can interrogate non-playable characters after defeating them in order to gain more information on their objectives, for example by scaring them while driving a car. Players can attack and overrun locations owned by the Italian mob and assign one of their lieutenants to operate the local area. The game also lets players drive cars from the era with realistic driving mechanics and sounds.

The light weapons usually include shotguns like the Remington 870 and Ithaca 37, submachine guns like the Heckler & Koch MP54 or the M1A1 Thompson, and a wide range of pistols and revolvers. Heavier options include the Hartmann HLP grenade launcher and the Hartmann AT-40 rocket launcher, along with explosives like the Molotov cocktail, the Screaming Zemi (used as a distraction), frag grenades and C4 charges.

The most useful ways to kill enemies are whistling to get their attention while in stealth and silently stabbing them, or just using the basic Colt M19 pistol. Enemies don’t really get any stronger or learn any new tricks as you progress, so you can keep using the same tactics.

If you ever run out of ammo or want a specific weapon or vehicle, you can use the awesome in-game feature of calling up allies and weapons dealers, who arrive in vans and supply you with whatever you need.

PROS

  • Large map with freedom to explore and no invisible in-game walls.
  • Classy and vintage arsenal of weapons and vehicles to choose from.
  • Old School Action! Melee combat and high speed chases.

CONS

  • Lag, slight glitches and performance bugs.
  • Repetitive missions make the game boring.
  • Loading time is long.

RATING: 7/10

Kartik Gupta

A Basic Introduction to Machine Learning Algorithms


Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed to do so, and it has widespread applications across several fields, such as diagnosis of diseases, optical character recognition, computer vision and email filtering. Many machine learning techniques are in fact used on a day-to-day basis in technologies like smart advertisements, friend recommendations and suggested search results, and we interact with them all the time.

Machine learning emerged from related artificial intelligence fields like pattern recognition and computational learning theory, and it relies heavily on statistics and mathematical optimization. Many classification problems, including anomaly-detection problems, can be solved by combining different machine learning algorithms. These algorithms form the backbone of many artificially intelligent systems.

Machine Learning algorithms are broadly divided into three categories that take on different approaches to help computers learn how to solve tasks on their own. These are: Supervised Learning, Unsupervised Learning and Reinforcement Learning.

Supervised Learning

 

This approach to machine learning involves giving a set of inputs and corresponding outputs to a computer and training it on this dataset of ‘training examples’. The trained model can then be used to predict outputs when new inputs are provided to it.

Typically the dataset consists of pairs of an input vector and a corresponding output. The task of the machine is generally to determine a function such that, when given a new input vector as its argument, it outputs a value that is the correct output with high accuracy.

This criterion leads to the necessity of what is known as the bias-variance trade-off. A low-bias, high-variance machine learning model will overfit the data, accommodating even the outliers and the anomalies, and will not be able to adjust to new inputs that are provided to it. Meanwhile, a high-bias, low-variance model will underfit the data; though it will generalize well to different data, its accuracy will be low and it will not fit any data particularly well.

Thus most algorithms provide a parameter that adjusts this bias-variance tradeoff so that the model generalizes well to other data outside the training data with higher accuracy.
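As a minimal sketch of such a parameter (assuming scikit-learn and NumPy are installed; the data below is synthetic and purely illustrative), the alpha value of ridge regression plays exactly the role described above: a larger alpha means more bias and less variance, a smaller alpha means the opposite.

```python
# A minimal sketch of the bias-variance knob using scikit-learn's Ridge
# regression. Larger alpha -> more bias, less variance (risk of underfitting);
# smaller alpha -> less bias, more variance (risk of overfitting).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)   # noisy targets

# Polynomial features let the model over- or under-fit depending on alpha.
X_poly = np.hstack([X ** d for d in range(1, 10)])
X_train, X_test, y_train, y_test = train_test_split(X_poly, y, random_state=0)

for alpha in (1e-6, 1.0, 100.0):   # low, medium, high regularization
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    print(alpha, model.score(X_train, y_train), model.score(X_test, y_test))
```

Comparing the training score with the test score for each alpha shows the trade-off in action: very small alpha fits the training data best but generalizes worse.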

Unsupervised Learning

 

This approach is used to infer results from unclassified data, where the data is provided in the form of unlabelled points without any output values. The data is then grouped into separate categories based on different criteria, such as the proximity of the data points.

A notable difference between unsupervised and supervised machine learning algorithms is the absence of labelled outputs against which to check the accuracy of an unsupervised model. This approach is useful when we need to find hidden structures in unlabelled data and draw inferences from them.

Data can, in general, be grouped into clusters that are similar in some respect, and these clusters create a new structure for finding patterns within the data. This is especially useful for data where such information or structure would usually have gone unnoticed.
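As a minimal sketch of this idea (assuming scikit-learn and NumPy are installed; the two “blobs” of points below are synthetic), k-means clustering groups unlabelled points purely by proximity, with no output values ever shown to the model.

```python
# A minimal sketch of unsupervised clustering with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabelled blobs of 2-D points; the model never sees any labels.
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_[:5])        # cluster assignment of the first few points
print(kmeans.cluster_centers_)   # discovered structure: the two blob centres
```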

 

Reinforcement Learning

 

This is an area of machine learning inspired by behaviourist psychology, in which the program takes actions and is either rewarded or penalized depending on their outcomes. It utilises dynamic programming techniques and is also studied in many other disciplines, such as game theory, statistics and information theory. This approach helps the model make the ideal choice in a given situation.

Many different solution strategies exist for such problems; common ones favour actions that provide greater reward in the long run rather than immediately. Its applications are manifold, ranging from controlling robotic arms to programming robots to avoid obstacles by penalizing every obstacle hit. Logical games, such as chess, can also be played by such models.
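As a minimal sketch (pure NumPy; the tiny “corridor” environment is invented purely for illustration), tabular Q-learning shows the reward-and-penalty loop described above: the agent is rewarded only at the right end of the corridor, so it must learn that walking right pays off in the long run.

```python
# A minimal sketch of tabular Q-learning on a toy corridor environment.
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))  # value of each action in each state
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:     # rightmost state is the rewarding goal
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(Q[state].argmax())
        next_state = max(state - 1, 0) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge Q towards reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.argmax(axis=1))  # learned policy: prefers "right" (1) in non-terminal states
```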

Machine learning is a wide and varied field with plenty of potential uses that can improve not only our technology but also our lives. In a world filled with pessimistic warnings about the potential impact of AI on our lives, it is important to remember that such models have several advantages too, and have led to groundbreaking research in many fields, ranging from the diagnosis of diseases such as cancer from a simple photo to cybersecurity and financial analysis. As in any such debate, understanding how these models work is often one step closer to arriving at a compromise or a solution.

Perhaps we can avoid Judgement Day, after all.

Sagnik Anupam

The Future of Computing Lies In The Cloud

THE FUTURE OF COMPUTING LIES IN THE CLOUD

Cloud computing, often referred to as simply “the cloud,” is the delivery of on-demand computing resources — everything from applications to data centers — over the internet on a pay-for-use basis. It uses remote servers connected to the client over the internet to provide services such as storage, processing and software.

There are many ways of implementing cloud computing. The three major ones are:

 

  1. Software as a Service (SaaS): The required software is installed on a server and can be used by the client through the internet. This ensures that the same version of the software is accessible almost everywhere. Examples include Google Docs, Microsoft Office 365 and Agile CRM.

  2. Platform as a Service (PaaS): Developers are provided a development and deployment platform on the cloud where they can upload their code and let the cloud decide how to manage the resources needed to execute it. This enables developers to create applications without worrying about server management, which is extremely useful for small startups or new developers. Examples include AWS Elastic Beanstalk, Microsoft Azure, Google App Engine and SAP Cloud Platform.
  3. Infrastructure as a Service (IaaS): IaaS involves using a remote server for storage and computation over the Internet. The main limitation of IaaS is data bandwidth; it can be used effectively only when the data bandwidth is comparable to the corresponding bandwidth of the local system. For instance, the average internet download speed in the USA is around 70 Mbps, or roughly 8.75 MB/s, which is within the same order of magnitude as the roughly 20 MB/s of a USB 2.0 external hard disk drive, so storing data in the cloud has become far more feasible. For computation, services like Google Compute Engine are used to analyse huge datasets in a batch-wise fashion; they can double up as primary servers or as additional servers in times of high demand. A few examples include Google Drive, Amazon Web Services and Google Compute Engine; a short storage sketch follows this list.
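As a rough illustration of the IaaS storage idea above, the snippet below uses the google-cloud-storage Python client to push a local file into a cloud bucket. The client library and credentials are assumed to be set up already, and the bucket name and file path are hypothetical.

```python
# A minimal sketch of IaaS-style object storage from Python, using the
# google-cloud-storage client (assumed installed and authenticated).
# The bucket and file names below are hypothetical.
from google.cloud import storage

client = storage.Client()                         # picks up default credentials
bucket = client.bucket("example-backup-bucket")   # hypothetical bucket name

# Upload a local file to the cloud and print where it ended up.
blob = bucket.blob("reports/2017-09.csv")
blob.upload_from_filename("reports/2017-09.csv")
print("Uploaded to", blob.public_url)
```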

 

What is the future?

 

Due to the advantages of scalability and the low maintenance costs associated with it, most businesses will shift their workloads to the cloud. As developers need not care about managing servers, this will lead to a “serverless architecture”.
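As a hedged sketch of what “serverless” looks like in practice, many function-as-a-service platforms (Google Cloud Functions’ Python runtime, for example) route each incoming HTTP request to a single function like the one below; the function name and response are purely illustrative, and the platform handles provisioning, scaling and routing itself.

```python
# A minimal sketch of a "serverless" HTTP function (illustrative only).
def greet(request):
    """Handle one HTTP request; `request` is a Flask-style request object."""
    name = request.args.get("name", "world")
    return "Hello, {}! Served without managing any servers.".format(name)
```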

 

I believe that in the future, the cloud is going to be responsible for a large portion of the computation performed for our devices, and these devices will become essentially “thin clients”, i.e. they will only need enough computing power to manage their I/O devices and basic networking, as most computation tasks will be outsourced to the cloud.

 

The 5 main factors which will affect the future of cloud computing are:

  1. Scalability: The cloud can easily allocate resources for a task and use the unallocated resources for another, making it more efficient than local computing. In traditional computing, the system sits idle when there is no task, and whenever there is a surge in demand it hangs or crashes because it cannot obtain more resources for the additional tasks. You have probably experienced this when booking tickets, checking results and so on. The cloud solves this problem by adjusting resources, so in case of a surge in demand it can provide more capacity and stop the system from hanging.
  2. 5G: The biggest hurdle for a centralized computing platform is network bandwidth. 5G will break through this hurdle by providing speeds of up to 10 Gbps, making communication with the central server fast enough for it to take over processing.
  3. Quantum Computing: Once quantum computers are commercialized and start being used as servers in the cloud, they will be able to provide enormous computing power to small devices connected over high-speed Internet.
  4. Internet of Things: Computing will become ubiquitous, and devices will keep getting smaller and cheaper, making it infeasible to include full-fledged processors in them. In consequence, they will need to send the data they collect to a cloud server, which will analyse it and send back instructions.

  5. Machine Learning: Google announced at the GCP NEXT 2016 conference that it is going to make the process of data ingestion, storage and training of machine learning models as simple as calling an API. This will allow developers to focus on creating incredible new applications without having to understand complex concepts like neural networks. As machine learning becomes more advanced, local general-purpose servers will no longer be efficient enough to run the neural networks. Such AI, machine learning and deep learning computations will be carried out by Google’s application-specific integrated circuits called Tensor Processing Units (TPUs), which are about 10 to 20 times more efficient than traditional GPGPUs in terms of cost-performance. Since most deep learning tasks are executed in a manner similar to batch processing, they can easily be moved to the cloud, where the servers use TPUs. Cloud TPUs are easy to program via TensorFlow, the most popular open-source machine learning framework; a small sketch follows below. It is interesting to note that even NVIDIA (one of the largest GPU manufacturers) intends to compete with Google with its Tesla V100 GPGPU.
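As a minimal, hedged sketch of the kind of TensorFlow code involved (TensorFlow is assumed installed, and the data here is random and purely illustrative), the following defines and trains a tiny neural network with Keras; on Google Cloud, essentially the same model definition can be handed to Cloud TPUs via a TPU distribution strategy.

```python
# A minimal sketch of defining and training a small neural network with
# TensorFlow/Keras. The random data stands in for a real dataset.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")      # toy binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32)
```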

Aditya Singh

The Music Revolution

 

Spotify was founded by Daniel Ek and Martin Lorentzon in Sweden and currently has 140 million users.

Pandora Internet Radio was founded by Will Glaser, Jon Kraft and Tim Westergren and currently has 81 million users.

Apple Music, Apple’s streaming service, was introduced in mid-2015 and currently has 30 million users.

Warmup:

Streaming and sharing are a revolution in the music industry. From phonographs in the beginning to cassettes and CDs in the 20th century, music has come a long way. Music streaming in the 21st century has taken the internet by storm.

 

The Players:

Today, the giants in the music industry include Spotify, Pandora and Apple Music. But that isn’t all; Jay Z bought the streaming service Tidal in 2015, and technology giants like Tesla also have streaming services in the works. Celebrity Will.i.am has also decided to launch his own streaming service. The competition is tough.

 

The Battleground and Leaderboard:

Music streaming is a daily routine now. With a target audience of all ages, it’s a hard game. Music streaming is everywhere, whether you’re using a phone, laptop, desktop or tablet; whether you’re in New York City, or in the most remote location, you can stream music anytime, anywhere as long as you have an Internet connection. With the current scene, the leaderboard looks a little bit like this:

Services with their number of users (in millions):

  • SoundCloud (175)
  • Spotify (140)
  • Pandora (77.9)
  • NetEase Cloud Music (55)
  • Jango (48)
  • Apple Music (30)

The Issues:

Music streaming has had its share of controversy too, from Spotify’s “fake artists” controversy, in which the service was reported to be promoting fake artists to reduce its royalty payouts, to Apple Music’s controversy over paying artists. This game involves dirty play.

 

The Final Problem:

In a nutshell, music streaming is one of the largest battlegrounds in tech, with tough competition and a huge audience. Only one can survive.

 

Vinayak Pachnanda

The Journey of Pendrive: From flash drives to wireless stick by Kaushiv Agarwal

Technology changes very fast, from big computers to portable laptops; from snail-paced 2G to blazing-fast 4G. However, the most drastic change was in terms of storage of data: The Invention of the USB Drive.

USB flash drives were invented at M-Systems, an Israeli company, and a US patent was filed on April 5, 1999, by Amir Ban, Dov Moran, and Oron Ogdan, all M-Systems employees. The product was announced by the company in September 2000 and was first sold by IBM in an 8 MB capacity starting December 15, 2000. Since December 2000, this nifty little tool has evolved considerably.

First Generation:

The original IBM DiskOnKey USB flash drive provided 8 MB of storage. In 2000, Lexar also introduced a Compact Flash (CF) card with a USB connection, along with a companion card reader/writer and USB cable that eliminated the need for a USB hub.

Second Generation:

By 2003, most USB flash drives had USB 2.0 connectivity, which has a transfer-rate upper bound of 480 Mbit/s; after accounting for protocol overhead, that translates to an effective throughput of roughly 35 MB/s.

Third Generation:

Like USB 2.0 before it, USB 3.0 dramatically improved data transfer rates compared to its predecessor. It was announced in late 2008, but consumer devices were not available until the beginning of 2010. The USB 3.0 interface specifies transfer rates of up to 5 Gbit/s (625 MB/s), compared to USB 2.0’s 480 Mbit/s (60 MB/s).
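As a quick sanity check of those numbers (a small illustrative snippet: bus speeds are quoted in decimal units, 8 bits per byte, ignoring protocol overhead):

```python
# Convert quoted bus speeds from megabits per second to megabytes per second.
def mbit_to_mbyte(mbit_per_s):
    return mbit_per_s / 8  # 8 bits per byte, decimal megabytes

print(mbit_to_mbyte(480))   # USB 2.0: 480 Mbit/s -> 60.0 MB/s (theoretical peak)
print(mbit_to_mbyte(5000))  # USB 3.0: 5 Gbit/s  -> 625.0 MB/s (theoretical peak)
```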

Fourth Generation:

Manufacturers have announced USB 3.1 Type-C flash drives with read/write speeds of around 530 MB/s. An example of this generation is the SanDisk Connect Wireless Stick, a flash drive reinvented to work not just with your computer but also with your phone and tablet. With the SanDisk Connect Stick in your pocket, bag, or across the room, you can wirelessly access your media, transfer large files, stream HD videos and music, and save and share media from your mobile device. Delivering up to 256 GB of extra capacity, the SanDisk Connect Wireless Stick empowers your mobile lifestyle, whether you’re running a sales meeting or taking a hike in the woods. It lets you stream videos or music to as many as three other devices at the same time and gives you access to all of your files from your pocket. When you’re feeling old school, use the USB connector to plug in.

The Lost Legacy

Uncharted: The Lost Legacy

Release Date: 22 August 2017

Platform: PlayStation 4

Developer: Naughty Dog

This unexpected addition to the hit adventure game franchise has been highly entertaining and maintains the franchise’s reputation thoroughly.

It has been almost a year since the fan-favorite Uncharted 4 brought Nathan Drake’s final adventure to a close. Naughty Dog has once again proved that its line of PlayStation exclusives is not going anywhere.

The Lost Legacy is essentially an expansion to last year’s game rather than a whole new entry, so it makes sense that it isn’t as massive a jump. However, with two powerful female protagonists having great backstories, Lost Legacy represents its subject matter excellently.

They’re a far cry from the roguish charms of Drake and Sully, but they end up being more than competent replacements.

Chloe, a thief with stealthy assassination skills, made her first appearance in Uncharted 2, and Nadine, a short-tempered mercenary, made hers in Uncharted 4. Their chemistry works well, with Chloe’s charming personality and Nadine’s tough attitude making the story solid.

PLOT

The story is based in India, with the two protagonists finding the legendary tusk of Ganesh, son of the Hindu god Shiva.

But as with every Uncharted game, another party is also competing to find the treasure; this time it is a ruthless warmonger, Asav.

Chloe and Nadine compete with Asav and his large army in order to obtain the ancient artifact. There are many close calls and plot twists which keep the plot energetic.

GAMEPLAY

Throughout the subtle single-player story you control Chloe while Nadine is AI-controlled.

The nice addition this time is that if you ever get stuck in a puzzle, Nadine completes it for you. This helps in maintaining the fast paced flow of the game.

In combat, players can use long-ranged weapons such as snipers, and short-barreled guns such as pistols and revolvers. Handheld explosives such as grenades and C4 are also available. Though players can attack enemies directly, they have the option to use stealth tactics to attack undetected or sneak by them. The game also introduces silenced weapons and lock-picking, a new addition using which players can obtain ammo and side-treasures.

Apart from these additions, The Lost Legacy still includes the familiar climbing and traversal, along with the loop of exploration, platforming, shooting and puzzle-solving that has become core to the franchise’s gameplay experience.

On the multiplayer side, The Lost Legacy also offers the familiar third-person shooter mode with a wide array of weapons and power-ups. It is good for playing a quick match or two but gets boring and repetitive over time. The survival mode is offered both offline and online, providing a challenging way to sharpen your shooting skills.

GRAPHICS

The graphics speak for themselves; Naughty Dog has been the master of squeezing every last bit out of PlayStation hardware. With the previous Uncharted games and The Last of Us on PS3, it pushed the boundaries of the system’s capabilities, making its games look visually stunning. This time, with the PS4 Pro’s 4K and HDR capabilities, the game has raised the bar to a new high.

WRAP UP

The Lost Legacy is as fun and satisfying as the other entries in the series. It may not be as dynamic and emotional as Uncharted 4, but it is still an awesome way to spend the weekend.

PROS-

  • Powerful characters
  • Smooth shooting mechanics
  • Simple story
  • Smart puzzles
  • Great visuals
  • Free of cost if you bought Uncharted 4.

CONS-

  • Only 6-7 hours of gameplay.
  • A bit of a clichéd ending.

OVERALL: 4.5/5                

 

 

COL. SANDERS’ NEW FLAGSHIP KILLER

It took a couple of re-reads to fully let the impact of the news sink in. At first, it was just words that registered: China, KFC, Phone. Then the concept fully began to connect.

The Chinese are now making a KFC smartphone.

Why are the Chinese now making a KFC smartphone?

Don’t get me wrong, I love China. I mean, like 90% of the tech that you get is “Made in China”, even the ubiquitous iPhone: “Designed by Apple in California. Assembled in China”. Not just tech, but even food! Did you know that although KFC originated in the U.S., it’s now the largest restaurant chain in China? Now, on the 30th anniversary of KFC in China, it has partnered with Huawei, one of China’s biggest smartphone producers, to release a KFC phone.

You read that right.

Granted, a KFC special phone manufactured by the same guys who make the perfectly reasonable Mate and Honor line of devices wasn’t exactly the product the world was pining for, but doesn’t it warm your heart to be living in an era of gratuitous marvels of technology such as this?

First Ever Fried Chicken Phone! [I think]

It’s a special edition version of the Huawei Enjoy 7 Plus. It has a 5.5-inch 720p display, a Snapdragon 435 SoC, a 12MP camera, 3GB of RAM, and 32GB of storage that can be expanded up to 128GB via a microSD card. There’s a fingerprint scanner on the back and, with a 3,020mAh battery, it should last a while on a charge.

It’s admittedly not a bad phone for the price (162 USD, or about Rs.10,000), although the contending Redmi Note 4 does pop into mind as a rival device by a fellow Chinese manufacturer.

But come on, let’s get real. We all know you’re not buying this thing for the specs. Doing so would be like buying Air Jordans to go for a run. No, you’re buying this baby for the prestige afforded by being the owner of a limited edition phone (Huawei says they’re only making 5,000) with Col. Sanders on the back.

“Kentucky China 30 years from 1987 to 2017, 30 years accompanied by the taste of the times, suck refers to the aftertaste! Kentucky together with Huawei joint cooperation, the introduction of Huawei Chang enjoy 7 Kentucky commemorative version of gorgeous struck! Commemorative Edition laser back carving, pre-installed Kentucky Super APP, with WOW member 10 thousand K gold, but also the first to experience k-music song function. Limited to 5000 will soon be on sale, waiting for you to grab!”

A cursory examination of the quoted text (which is a victim of some unfortunate translation) reveals the phone is the whole package, coming with ten thousand K-Dollars (the restaurant’s digital currency) and the ability to share songs from a playlist when on the premises of a KFC, which basically means they’re simultaneously re-inventing the jukebox. Such innovation is rarely seen outside keynote events, so take note.

If you’re dead set on getting mileage out of the phone, though, you’d ideally be Chinese, as the bands the phone uses may not work worldwide and the version of Android 7.0 comes without Google built into it, as it’s meant for the Chinese market.

Better reach into the bucket of cash and pull out some dollars, cause this phone be finger-licking good.

Ishir Bhardwaj

Brain-Computer Interfaces by Angad Singh

Brain-Computer Interfaces: An Overview

Picture this: it’s late at night and you have a big test tomorrow which you haven’t studied for. As you slog through the course material, you wonder if there is a way to just download information straight to your brain. That might be closer than you think.
Technology (like DARPA’s Neural Engineering System Design) now exists by which your brain can directly interact with a computer; this is called a Brain-Computer Interface (BCI). Even though this might seem magical, there is actually something very simple behind it. Our brains have billions of neurons connected by axons and dendrites. When we think, move, or memorize something, our neurons transfer data through small electrical signals. These electrical signals move around our brain at speeds of more than 250 mph! Although the paths these signals take are insulated by a fatty white substance called myelin, some of them escape, and these escaping signals are what a BCI picks up and translates into, say, mouse clicks and keypresses. Following animal testing, the first prosthetics that could be directly controlled by their users’ brains appeared in the 1990s.

A Brief History


Hans Berger discovered the electrical activity of the brain in 1924 and recorded it by means of EEG, or electroencephalography.
In order to record these electrical signals, he inserted silver wires into the scalps of his patients, which were later replaced by silver foils attached to the patient’s head by rubber bandages.


Jacques Vidal, a UCLA professor, is widely recognized as the creator of BCIs. He coined the term and produced the first peer-reviewed publication on this topic.
Now, BCIs are most often used for researching, mapping, assisting, augmenting or repairing human cognitive or sensory-motor functions.

Devices like these can be very helpful for disabled people. DARPA is funding the research and development of BCIs that allow blind people to see. This might sound complicated, but the concept behind it is fairly simple: researchers can figure out what electrical signals are sent to the brain when our eyes see the color red, and then rig up a camera to do the same.

BCIs can also allow people to control prosthetic limbs with their brains. Due to the incredible cortical plasticity of the brain, signals from prosthetics can, after some adjustment, be interpreted by the brain in the same way it would interpret signals sent by natural limbs.

There are options for people who’d like to experiment with BCIs outside the lab. wyrm, a Python library, allows you to play around with EEG data and build your own brain-computer interfaces.
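As a rough, generic sketch of the kind of signal processing such toolkits deal with (this uses NumPy and SciPy directly rather than wyrm’s own API, and the signal is simulated), the snippet below band-pass filters a fake one-channel EEG recording to isolate the 8–12 Hz alpha band:

```python
# A minimal, generic sketch of EEG-style signal processing: band-pass filter
# a simulated one-channel signal to isolate the 8-12 Hz alpha band.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                           # sampling rate in Hz
t = np.arange(0, 5, 1 / fs)          # 5 seconds of signal
# Simulated EEG: a 10 Hz alpha rhythm buried in noise.
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.normal(scale=1.0, size=t.size)

# 4th-order Butterworth band-pass between 8 and 12 Hz (frequencies normalized
# to the Nyquist frequency, fs / 2).
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
alpha_band = filtfilt(b, a, eeg)
print(alpha_band[:5])
```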

Angad Singh

Get With the Lingo by Aditya Joshi

What is it, really, that differentiates us from animals? Some would argue it’s our self-awareness, others would say it’s our capacity to think at a higher level, while others still would be adamant in their belief it’s the endless adaptiveness that humanity has that is our key advantage.
But I believe the answer is a lot simpler. What makes us special and successful as a species is our extraordinary ability to communicate with one another and share our experiences. Being able to pass on knowledge and learn from the experience of others ensures steady progress; a continuous, ceaseless march towards advancement, fueled by a steady stream of gradually accumulating information handed down from one generation to the next.
The impact on history cannot be overstated; all sciences, cultures, arts, history, music and even religions are made possible by the very existence of language. The fabric of society itself relies on our ability to communicate.
With the advent of the spoken word, hunter-gatherers no longer needed to taste a berry to test for its edibility, they could simply ask more experienced persons about its nature.
And, with this simple, often disregarded step, humanity was on its way to becoming the dominant species on this planet.
So how did it all begin? How did the spoken and written word decisively shape the course of history?
As for the first question, unfortunately enough, nobody is quite sure as to what the answer is. There are no known animals in a transitory stage from not speaking to speaking.
There is, however, a single, common theme that visibly stands out among all these theories: ‘The world’s languages evolved spontaneously. They were not designed’.
Either way, we don’t know for sure how it finally came to be.
What we do know is how language has impacted human history. A basic application could be a hunter in primeval times seeing a deer ripe for the killing, and subsequently letting out a grunt that informs his partner it’s time to start moving.
But, as human interactions grew more complex, the possibilities that language gave us increased exponentially.
The power of speech allowed great orators to bring together large groups of previously small, solitary tribes into consolidated units. People started to settle down and live with each other, and as this happened language once again propagated a culture of sharing, trading and collectiveness.
In such a manner, society was born.
However, the spoken word has limits. Information can only be disseminated to a fixed number of people at a time and there can never be a permanent record. Word-of-mouth accounts invariably end up getting distorted through the generations. Lastly, the rapid growth of cities demanded administrative measures that could simply not be carried out verbally.
And thereby came the written language. The oldest civilization in the world, the Mesopotamian, exhibits the earliest examples of written language coming to the fore. Through the Uruk period a script was developed that allowed city authorities to administer large groups of people. Intensive trade and contact between regions and the maintenance of records were all enabled by language.

Sumerian Language Through Time

The above image shows the evolution of the sign for “head” in cuneiform script, the oldest known script, over several millennia. It is apparent that written script developed organically, starting off clunky and gradually becoming streamlined and stylized as scribes through the ages simplified the symbols for their own purposes.
Language evolved just like a living being, growing and changing in accordance with the convenience of the times, driven forward not by a single individual but by a collective effort towards codification and the writing down of knowledge that seems to compel man.
Languages evolved from and through each other: English started off as a West Germanic language strongly influenced by Latin, which also heavily influenced modern Italian and in turn was born of the Etruscan alphabet.
The invention of printing techniques in China, and later of the mechanical printing press by Gutenberg, suddenly meant that an individual could spread all kinds of propaganda and ideas to people they had never even met in person.
The Bible, the Communist Manifesto, the propaganda that fueled the French Revolution; all events that changed the lives of billions and shaped the world as we know it, and all the dissemination of knowledge among mankind that has ever taken place, were made possible by the distinctly human characteristic of voluntary communication.
And now, we stand at the doorstep of a new epoch.
Facebook’s AI recently came up with a completely new language, and two Facebook chat bots had the following conversation:
Bob: I can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me.
Deep.
This raises questions about the direction in which language is headed. So far it has been governed by human need and initiative; now, it would seem, it is set to be drawn forward by non-human interventions.
All of this, made possible by a single word millennia ago.
Indeed, my ability to type this article and yours to read it hinges entirely on an invisible library that you, I, and all humans carry in our heads, one that lets us express our thoughts and feelings to each other and continue humanity’s ceaseless march towards tomorrow.