Tuesday, July 22, 2014
A LifeStraw is a filter designed to make water safe to drink; it can remove 99.9999% of the parasites and bacteria in water.
Pizza Scissors merge a pair of scissors with a spatula so that you can easily cut and pick up a pizza slice without any toppings falling off.
The idea of the Bike Backpack is that you hike up a mountain with your bicycle on your back, then ride all the way down.
This is a cheap alternative for storing little pieces of jewellery and cash.
Charge electric vehicles parked next to a solar tree.
The AirMouse is designed to avoid the repetitive stress injuries caused by normal mouse use; it only functions as a mouse when your hand is flat.
This handheld apple peeler makes it easy to peel apples. You can even shave off apple skins with just one hand.
These lazy glasses allow you to read in bed or watch TV while you are lying flat on your back.
Babies can now polish the floor as they learn to crawl.
With this extra wide rear-view mirror, drivers can expand their field of view to 180 degrees.
This storm umbrella is designed to protect you in all kinds of weather conditions and can withstand 100 km/h winds.
Just simply use small branches or twigs you find anywhere, and you can have clean-burning fire and electricity for your devices.
Computers can drive cars now. What does that mean for us as a society? As technology advances and computers learn to perform human tasks with greater efficiency than even humans themselves, how many of us will be left in the dust? Is there a way to ensure that we remain irreplaceable as workers?
The threat is still far in the future, so there’s no need to panic about it yet. However, the younger you are, the more you should be concerned. Which career paths should you choose to guard against automation takeover?
It’s one thing for technology to enable creative endeavors. Digital art tools like Photoshop and Illustrator have been monumentally useful for graphic artists. Camera advancements have made digital photography cheaper and more convenient than ever before. And how much music would we be missing out on if it weren’t for FL Studio and GarageBand?
Yet, even so, creative endeavors will never be replaced by computers. Art is an expression of human creativity, imagination, and improvisation – something that computers will never have.
Any child can say, “I’m a dog” and pretend to be a dog. Computers struggle to come up with the essence of “I” and the essence of “dog,” and they really struggle with coming up with what parts of “I-ness” and “dog-ness” should be usefully blended if you want to pretend to be a dog.
This is an important skill because creativity can be described as the ability to grasp the essence of one thing, and then the essence of some very different thing, and smash them together to create some entirely new thing.

– David Brooks, “What Machines Can’t Do”
The world is home to hundreds of millions of sports fans. As a species, we love to play and we love to watch others play, and sports are the perfect expression of our tendency to play. Would a stadium full of soccer robots be entertaining to watch? Perhaps for a little while, but only for its novelty.
Sports are compelling because of the human narrative that lies under the surface. We aren’t so much drawn to a sport itself as we are to the players of that sport. The history, the rivalries, the athleticism, the stakes – that’s what we want to watch and computers will never be able to replicate that kind of excitement.
Healthcare & Medicine
On the one hand, the aspects of medicine that are entirely based on medical knowledge, technical expertise, and data analysis could be reasonably automated without much consequence. However, there are elements of healthcare that computers just aren’t capable of handling: bedside manner, making tough decisions from incomplete patient data, dealing with human psychology, and so on.
At the very least, there are a whole host of legal issues that would arise from putting a patient’s life in the hands of a medical robot that might malfunction and make a wrong decision. That threat alone would ensure that humans always have a place in healthcare.
Future technological advancements may change the landscape of education, but they will never eradicate the need for human teachers. It’s true that online course sites are increasing in popularity, but the fact remains that the content of online courses doesn’t just materialize out of thin air. Someone needs to create it.
And what about teaching subjects that aren’t as objective as science and math, that aren’t simply based on knowledge? Would a computer be able to understand the nuances of music, art, and literature, let alone teach them in a subjective manner? The possibility of that is doubtful, and even if it were to come to pass, it wouldn’t be for a long time.
Just as past mechanisation freed, or forced, workers into jobs requiring more cognitive dexterity, leaps in machine intelligence could create space for people to specialise in more emotive occupations, as yet unsuited to machines: a world of artists and therapists, love counsellors and yoga instructors.
Such emotional and relational work could be as critical to the future as metal-bashing was in the past, even if it gets little respect at first.

– The Economist, “The Future of Jobs: The Onrushing Wave”
Plus, there will always be a demand for personal tutoring. Even if classrooms and courses could be taught without human involvement, computers will never be able to personalize the material on a student-to-student basis. For that, humans will always be needed.
For as long as automation has been a part of the human economy, there have always been mistakes. Machines break down. Metals rust. Cogs can wear out and motherboards can fry. Under perfect conditions, quality assurance wouldn’t be necessary. But in the real world, an error will crop up somewhere along the line and nobody but a human will be able to spot it.
Why not just create QA machines that look for errors? Because then you enter an infinite regression. What happens when the QA machine itself breaks down? Will there be a second QA machine for the first QA machine? At some point, you’ll need a human.
Politics & Law
Depending on how cynical you are about the state of world politics, politicians may as well be robots already. However, if we’re going to be serious about it, then it’s reasonable to assume that computers will never overtake the realm of politics.
Computers won’t be placed in charge of towns, cities, states, or countries. Computers won’t be creating new laws. Computers won’t be making judicial decisions. Governors, lawmakers, judges, and juries will always need some sense of human discernment that computers will never be able to offer.
In the end, the answer to “Which jobs are safe from computers?” is quite simple. Avoid the overlap between humans and computers and look at jobs that require an element of human behavior that computers cannot replicate: intuition, creativity, innovation, compassion, imagination, and so on. Those jobs will always be safe.
And what happens when computers become capable of those human traits? At that point, the distinction between humans and computers would be too blurred, and then the entire question becomes irrelevant.
What other jobs will computers never replace? Share what you think and comment below!
Image Credits: Ballerina Via Shutterstock, Sports Via Shutterstock, Stethoscope Via Shutterstock, Guitar Tutor Via Shutterstock, Metal Gears Via Shutterstock, Gavel Via Shutterstock
1.2 million people die in car accidents every year, and 50,000 are maimed. We could save almost all of those lives. Millions of people waste billions of hours commuting; with robot cars, they could work, watch Netflix, or read a book instead. Robot cars would let us get rid of parking lots and traffic jams.
Blind people, the elderly, and people too young to drive would be able to move around freely without a human driver. The savings in lives, dollars, and productivity are incalculable. Machines don’t get drunk, tired, or distracted. They follow traffic laws exactly. These are things that everybody wants, with far-reaching implications — the hundred billion-dollar question is, how long is it going to take us to get there?
A World of Driverless Cars

Google describes the project in a recent blog update like this:
“Ever since we started the Google self-driving car project, we’ve been working toward the goal of vehicles that can shoulder the entire burden of driving. Just imagine: You can take a trip downtown at lunchtime without a 20-minute buffer to find parking. Seniors can keep their freedom even if they can’t keep their car keys. And drunk and distracted driving? History. [...] they will take you where you want to go at the push of a button. And that’s an important step toward improving road safety and transforming mobility for millions of people.”
Autonomous cars have been something of a hot topic in recent years, with Google leading the charge. Google has driven its fleet of experimental robot cars more than 1.1 million kilometers without serious incident, and recently premiered a new low-speed electric prototype to fine-tune city driving - with no steering wheel or brakes whatsoever.
Outside Google, Toyota, Honda, and Ford all have their own self-driving car projects, although none of them are nearly as advanced as Google’s. In fact, several automakers have dismissed the idea of fully autonomous cars out of hand as too challenging, focusing instead on driver assistance features.
Google, for its part, has outlined an aggressive timeline to commercialization, hoping to partner with automakers to release autonomous vehicles, running Google software and manufactured by third parties, before the close of the decade. In fact, Google intends these vehicles to hit the market no later than 2018. So what’s standing in the way of that goal?
Technological Challenges

Google’s prototype is really, really good — but it isn’t perfect. Here’s how the car works now:
The primary sense organ of the robot is a spinning LIDAR turret on the roof of the car. The LIDAR turret paints the world around the car with an infrared laser beam at very high speed. By recording the position and intensity of laser light reflected back, a simple machine vision algorithm can quickly compute a three-dimensional map of the objects around the car many times a second, allowing it to identify objects like cars, pedestrians, sidewalks, and traffic cones.
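To make the geometry concrete, here is a minimal sketch (not Google's actual pipeline) of how each laser return — an angle pair plus a measured range — becomes one point in a 3D map. A full LIDAR sweep produces hundreds of thousands of such points per second:

```python
import math

def lidar_to_xyz(azimuth_deg, elevation_deg, range_m):
    """Convert one spherical LIDAR return into Cartesian coordinates
    relative to the sensor (x forward, y left, z up)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A sweep becomes a point cloud: a list of (x, y, z) points that
# perception software can cluster into cars, pedestrians, and cones.
sweep = [(0.0, 0.0, 10.0), (90.0, 0.0, 4.0), (0.0, 90.0, 2.5)]
cloud = [lidar_to_xyz(az, el, r) for az, el, r in sweep]
```

The hard part, of course, is not this conversion but the machine-vision step that segments the resulting cloud into meaningful objects.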
The car, as a secondary sense, has a number of cameras that it uses to gather additional information about the world around it (identifying signals from cyclists and other cars, and reading the status of traffic lights and signs). Finally, the car has a GPS, which tells it, to within a few meters’ accuracy, where it is located in space.
None of these senses is good enough on its own to direct the car, but by using clever software to fuse these data sources together, the car is able to make intelligent driving decisions. To make the task easier, Google has been using Street View cars with LIDAR turrets for years – cars that, along with providing you with weird journeys into the past, have been systematically 3D-mapping streets all over the world.
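One classic way to fuse noisy sensors — a sketch of the general idea, not a claim about Google's specific algorithm — is inverse-variance weighting: each sensor's estimate is weighted by how little noise it carries, so a precise sensor dominates a sloppy one. The numbers below are illustrative:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent position
    estimates, each given as a (value, variance) pair. The less
    noisy a sensor is, the harder it pulls the fused result."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    variance = 1.0 / total  # fused estimate is tighter than either input
    return value, variance

# GPS says we're at x = 103.0 m, but is only good to a few meters;
# LIDAR map-matching says x = 100.2 m with roughly 10 cm of noise.
fused_x, fused_var = fuse([(103.0, 9.0), (100.2, 0.01)])
# The fused estimate lands very close to the precise LIDAR figure.
```

This is the intuition behind the Kalman-style filtering commonly used in robot localization: no single sense is trusted outright, but their combination is better than any one alone.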
All of this data has been meticulously tagged to let the car’s computer know the positions of traffic lights, and what the speed limits and lane designations are for each road.
The robot can fine-tune its GPS position by comparing its current LIDAR data to old 3D maps of the street it’s on, to ensure that it doesn’t drift out of its lane (this also allows it to navigate when GPS isn’t an option, like when it’s driving through a tunnel or a parking garage). Furthermore, the car can access the metadata for its local environment to tell it when the speed limit changes and where to look for traffic signals.
This combination of hardware and software can do a lot of remarkable things: it can see and predict the motions of cyclists and pedestrians. It can identify construction cones and roads blocked by detour signs, and deduce the intentions of traffic cops with signs.
It can handle four-way-stops, adjust its speed on the highway to keep up with traffic, and even adjust its driving to make the ride comfortable for its human payload. The software is also aware of its own blind spots, and behaves cautiously when there might be cross-traffic or a pedestrian hiding in them.
There are, unfortunately, also some things that the car can’t do. The biggest issue is weather: Google’s cars have mostly been tested in California. In a larger roll-out across the world, autonomous cars will need to cope gracefully with flash flooding, heavy fog, and deep snow. Which is a problem, because all of those seriously mess with the heavy lifter of the robot’s senses: the LIDAR.
Snow and standing water scatter the laser beam, making it difficult to reliably collect data, and fog or heavy rain can dramatically cut the distance the LIDAR can see. Without a reliable LIDAR, the robot is literally dead in the water.
Fixing the weather problem is still an open area of research. If we’re lucky, it may be possible to use clever noise-filtering algorithms to extract meaningful data even from weather-clouded LIDAR, or shift the burden to the cameras, allowing the robot to continue to maneuver, although probably at a reduced speed.
If not, it may be necessary to add a new suite of sensors (perhaps SONAR or RADAR) to give the robot 3d mapping capabilities even in the event of LIDAR failure. Either way, Google’s working on it.
A deeper problem, though, is what’s called the long tail. Think of it like this: the majority of driving that self-driving cars will be asked to do is on the freeway. For a robot, freeway driving is easy. The next use case would probably be low-speed city driving in good weather, which robots are also pretty good at.
Unfortunately, even though these represent probably 90% of all the driving situations the cars will ever face, they aren’t the only two possibilities. What about parades? What about ambulances? Rock slides? Car accidents? Flat tires? Jaywalking dogs? Road construction? Tornadoes? Getting pulled over by the police?
The point is that as you go down the list of cases the car has to handle, sorted by probability, you find that there are an almost infinite number of them, each with a tiny slice of the probability pie. You can’t hard code behavior for every possibility.
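A toy model makes the long tail vivid. Assume, purely for the sake of argument, that driving-scenario frequencies follow a Zipf distribution, where the k-th most common scenario is 1/k as likely as the most common one (the scenario count is hypothetical):

```python
# Zipf-like probabilities over one million ranked driving scenarios.
N = 1_000_000  # hypothetical number of distinct scenario types

harmonic = sum(1.0 / k for k in range(1, N + 1))  # normalization constant

def prob(rank):
    """Probability of the rank-th most common scenario."""
    return (1.0 / rank) / harmonic

top_100 = sum(prob(k) for k in range(1, 101))
tail = 1.0 - top_100
print(f"The 100 most common scenarios cover {top_100:.0%} of driving; "
      f"the rest is spread across {N - 100:,} rare cases.")
```

Under this (admittedly crude) assumption, even handling the hundred most common situations perfectly leaves well over half of all driving in the tail — which is exactly why hard-coding cases one by one can never finish the job.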
You have to accept that eventually your robot car will encounter something you didn’t plan for, and will behave incorrectly. It might even get people killed. The best you can do is try to cover enough cases well enough that the robot is still safer to use than a human-directed car.
Right now, the Google car isn’t quite far enough down that list yet, but it is starting to get close, and Google is working on developing safe fallback behavior to ensure that the car won’t actively harm anyone, even in the case of software failure or unanticipated driving conditions.
Google’s method of building up these cases is clever: the company has a policy that when the car makes an error, or a human is forced to take control, the incident is logged, and the software is revised until it can pass simulated versions of the same scenario. Any large-scale changes to the software are tested against this database of incidents to ensure that nothing has been inadvertently broken.
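In software-engineering terms this is a regression-test suite: every logged incident becomes a permanent test case that all future revisions must pass. A minimal sketch of the idea (the scenario format and planner stand-in here are entirely hypothetical):

```python
# Each logged incident is stored with the behavior the fixed software
# should exhibit in a simulated replay of that scenario.
incident_db = [
    {"scenario": "cyclist_hand_signal", "expected": "yield"},
    {"scenario": "four_way_stop_tie",   "expected": "wait"},
]

def planner(scenario):
    """Stand-in for the real driving software under test."""
    decisions = {"cyclist_hand_signal": "yield",
                 "four_way_stop_tie": "wait"}
    return decisions.get(scenario, "stop_safely")

def regression_suite(plan_fn):
    """Replay every logged incident; return the scenarios that fail."""
    return [case["scenario"] for case in incident_db
            if plan_fn(case["scenario"]) != case["expected"]]

failures = regression_suite(planner)  # empty list means no regressions
```

The payoff is that the car's competence only ratchets upward: a mistake, once made and fixed, can never silently reappear in a later software release.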
There are softer limitations as well – the LIDAR turrets used by the robots currently clock in at more than $30,000. The good news here is that this is largely because those LIDAR turrets are a specialty item used for only a few applications. Mass production will certainly bring those costs down.
Furthermore, if self-driving cars are adopted under the cab model (likely provided by Google’s protégé, Uber), the needed ratio of cars to car users will likely be low: people going to similar places can be carpooled together by centralized routing software in exchange for reduced fees, and cars can maintain more or less continuous usage. This reduces the cost per user dramatically, even if the cars themselves are very expensive.
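Some back-of-the-envelope arithmetic shows why the cab model defuses the hardware cost problem. All figures below are made up for illustration, not industry estimates:

```python
# How ride sharing spreads one very expensive vehicle across many riders.
car_cost = 150_000        # hypothetical price of one autonomous cab, in dollars
lifetime_trips = 100_000  # trips served over the vehicle's working life
riders_per_trip = 2.0     # average occupancy thanks to pooled routing

cost_per_ride = car_cost / (lifetime_trips * riders_per_trip)
print(f"${cost_per_ride:.2f} of vehicle cost amortized into each ride")
```

Even with a six-figure sticker price, continuous utilization and pooling push the per-ride hardware cost down to pocket change — which is why the LIDAR turret's price tag matters far less under a cab model than under private ownership.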
Legal Challenges

Self-driving cars sound pretty much like a grocery list of things that scare regulators: autonomous robots with lethal force, disruptive new technology, mechanized unemployment, and large corporations putting millions of cameras all over the world.
Robot cars will probably kill people (though at a rate much lower than human drivers), they’ll displace millions of truck drivers and hundreds of thousands of cab drivers, and they’ll provide Google with an enormous amount of personal data about their users. Needless to say, there’s going to be some resistance to getting self-driving cars legalized, particularly since they require major overhauls to the regulatory infrastructure already in play.
In order for self-driving cars to become a legal, mainstream part of our lives, we’re going to have to give up on some very old legal precepts, including the idea that the human being in the driver’s seat of a car is responsible for its actions.
The states that have issued preliminary regulation to allow for the testing of autonomous vehicles (including California and Nevada) have employed a variety of legal shortcuts to allow the research to take place.
In California, for example, the person who initiates the car’s journey is legally the operator, even if they aren’t actually in the car at the time. This is an obviously inadequate long-term answer, as this means that (for example) the operator could be charged with DUI, even if they were nowhere near the vehicle that they dispatched while drinking.
California hopes to release more permanent regulation for such consumer vehicles by early 2015, but Consumer Watchdog, an independent advocacy group, is lobbying for them to delay the regulation for eighteen months to allow more thorough safety testing.
Google hopes to encourage lawmakers to place liability for the car’s actions with the manufacturers of the self-driving hardware, which they see as the fairest way to distribute blame: it seems silly for the law to hold a human operator responsible for behavior that they have no control over.
The regulators involved admit that legislating for autonomous vehicles is a difficult problem:
“We’re really good at licensing drivers and regulating vehicles and the car sales industry, but we don’t have a lot of expertise in developing those types of standards,” Soublet said. “So as we start approaching things like that, we have to back off. We don’t have the technical ability to do it. We have to come at this from a regulatory perspective of what we as a department are capable of.”

They do, however, agree that the field is worth the effort.
“It’s an issue that draws you in. It’s our future. We find it very exciting to work on [...] Brian [Soublet] and I, we can’t believe that we’re working on this. It’s something that will change the way that we all live.”

Federal regulation is on its way, but may not arrive for several years. The National Highway Traffic Safety Administration released a preliminary statement on the issue, in which it expressed some enthusiasm for the prospect of fully autonomous vehicles.
“America is at a historic turning point for automotive travel. Motor vehicles and drivers’ relationships with them are likely to change significantly in the next ten to twenty years, perhaps more than they have changed in the last one hundred years.”

However, the NHTSA also seems unprepared to issue any clear regulation in the foreseeable future, and plans mostly to leave these regulatory issues in the hands of individual states, raising the possibility of poorly regulated states being ‘dead zones’ that autonomous cars on cross-country road trips must avoid.

This is where the good news starts. The hopeful mother of these machines is Google, which also happens to be one of the largest lobbying juggernauts in the United States (it ranks eighth, beating out Boeing and Lockheed Martin). Google is well prepared to guide regulation into a shape friendly to the future of autonomous vehicles.
The Road Ahead

If there’s a simple message to take away from the situation right now, it’s this: the challenges left to solve before autonomous vehicles can go mainstream are difficult and substantial. The technology and legal infrastructure are not currently in place to allow these vehicles to truly fulfill their potential. However, these problems are also well defined, solvable, and being investigated by some of the smartest people on the planet.
There is a very good chance that the technology, at least, will be ready to be deployed in test markets like California and Nevada by Google’s tentative 2018 date. There’s an even better chance that, by ten years from now, the technology will have radically transformed the way that nearly everyone on Earth lives their lives.
These changes will range from car culture (the end of automobile ownership as an adult rite of passage) to the way people work and socialize to the way we design our cities. If these challenges can be met, it’ll be the most significant change in transportation since the invention of the automobile.
Feature Image: “The Love Bug“, by JD Hancock
Images: “Google self driving car at the Computer History Museum“, by Don DeBold, “Google Self-Driving Car“, by Roman Boed, “Toyota self-driving car“, David Berkowitz, “Velodyne High-Def LIDAR“, Steve Jurvetson Source: www.makeuseof.com
iOS 7 already has this feature for many apps, and so do the custom skins on the LG G3 and HTC One M8. Wherever you turn, it seems that colorful status bars are the way of the future — even Android L is supposed to bring more colorful status bars along with it. But if you’re impatient for the awesome new look, you’ll want to try out this method.
Before And After
On the left is what most Android status bars look like. Along the top of your screen is a black bar that is always there no matter what app you open. But on the right, you can see what the status bar looks like when the Facebook app is open with this mod. It’s amazing how such a small change can make such a huge visual difference.
You can also add a gradient to more easily distinguish the status bar, and if you’re using software buttons, you can even set up the navigation bar (the black bar along the bottom with Back, Home, and Recents buttons) to change color as well. Stick around till the end to find out how.
Setting Up Xposed

The first step in this process is to download the Xposed Framework. This amazing Android app allows you to tweak your device as much as you want without having to flash a custom ROM. Keep in mind, though, that you do need to be rooted. If you have no idea what I mean by this, check out our simple Android rooting guide.
We’ve covered Xposed in depth before, as it really is worth your time, but let’s cover briefly how to set it up before getting started. First, go into your settings and under Security, check the box for “Unknown Sources”, which allows you to install apps from outside the Google Play Store.
Next, download the installer from the Xposed website and run it, as shown above. However, Xposed by itself is just a framework for all of the awesome modules that can run on top of it. To change the color of your status bar, you’ll need a module called Tinted Status Bar.
Installing Tinted Status Bar

Now that Xposed is installed, open it up from your app drawer like a regular app and navigate to the Settings. Because Tinted Status Bar is technically still in beta, you need to tell Xposed to show Beta modules as well as Stable modules. If you’re feeling particularly curious, you can even choose to go for Experimental builds.
With that done, navigate to the Download tab and search for Tinted Status Bar. Once you find it, swipe to the right to the tab called Versions and download the latest version of the app.
Once installed, you need to make sure that Tinted Status Bar is activated under the Modules section — then reboot your device.
Changing The Color Of The Status Bar

With Tinted Status Bar up and running, you can now find it in your app drawer like any other app. Best of all, you can make any changes you want and they don’t require a restart.
Some apps, like Google+, already have a defined “Action Bar” color and will change the color automatically — but most will not. To manually adjust them, find your app under Per App Tints, make sure the slider at the top is set to On, and select “All activities, below settings override this”.
From here, you can choose if you want to link the status bar and nav bar colors (if you have a nav bar on your device) and if you want the status bar to stay the same color even if the action bar goes away temporarily within the app.
You can then tap on one of the four colored squares to adjust their color and their opacity. If you’re having issues, make sure the opacity slider is all the way to the right. You can then adjust the value or type it in to reach your desired color.
But what if you don’t know the color code for Facebook blue? No problem. That’s where Color Picker comes in.
Once you download this app from the Play Store, you can take a screenshot of the app you want to know the color of (in this example it is Play Music) and then open Color Picker and select the screenshot. Simply tap on the screen anywhere on the colored portion of the action bar, and it will tell you the color code.
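The color codes themselves are just hex-encoded RGB values. If you ever have a color's red/green/blue components but not its code, the conversion is straightforward — here's a small sketch (the sample orange below is illustrative, not an official app color):

```python
def rgb_to_hex(r, g, b):
    """Convert an RGB triple (0-255 each) into the #RRGGBB string
    you would paste into Tinted Status Bar's color field."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

# Sampling a pixel from an action bar might give you something like:
code = rgb_to_hex(239, 101, 47)
print(code)  # -> #EF652F
```

Each pair of hex digits is simply one channel's 0-255 value written in base 16, which is why apps like Color Picker can report a code instantly from a single tapped pixel.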
Troubleshooting

If you’re using a custom launcher that allows for a transparent status bar and/or navigation bar, you may run into an issue where the bars turn black instead of transparent on your homescreen or lockscreen. If this happens, the fix is surprisingly easy.
Navigate into the settings and uncheck the Respect KitKat APIs box, then go find your Launcher and set its opacity to zero.
If some of your apps get black text in the status bar when you want white text, change the “Max HS[V] for black icons” value to a higher setting, and that should take care of it.
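The idea behind that setting is a brightness cutoff: backgrounds whose HSV "value" channel is above the threshold get black icons, everything else gets white. A rough sketch of the logic — the threshold and the decision rule here are illustrative guesses, not the module's actual implementation:

```python
import colorsys

def icon_color_for(background_rgb, max_v_for_black=0.85):
    """Pick black or white status-bar icons based on the V (value)
    channel of the background color. Threshold is illustrative."""
    r, g, b = (c / 255.0 for c in background_rgb)
    _, _, v = colorsys.rgb_to_hsv(r, g, b)
    return "black" if v >= max_v_for_black else "white"

print(icon_color_for((255, 255, 255)))  # white background -> "black"
print(icon_color_for((59, 89, 152)))    # dark blue background -> "white"
```

Raising the threshold therefore means fewer backgrounds qualify for black icons, which is why bumping the value up fixes black-on-dark text.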
If you want to have certain parts of an app display a different color than other parts, use the “Toast activity titles” button to display toast notifications of each activity as it starts. You can then go into each app and set custom colors based on the activity within the app.
Head on over to the original XDA thread and do a quick search there if you run into any other issues. Chances are that someone else has already asked the question. If not, feel free to ask it yourself.
What Do You Think?
I absolutely love the new colors of my status bar and nav bar, and I could never go back to just plain black now. If you appreciate this mod as much as I do, head on over to the Google Play Store and buy the Tinted Status Bar Donation for $2.02. It doesn’t provide any additional functionality (the app is completely free and ad-free!) but it does support the developer.