Archive for the ‘technology’ Category
Imagine buying your SIM-free mobile phone from a local electronics store and logging into your Google or Apple account as soon as you turn the phone on for the first time. Then imagine having the phone ready to use for voice calls with a phone number provided to you by Google Talk or Skype, and ready to access email, YouTube or Facebook.
That same phone automatically hooks to your home Wi-Fi or any of the available 3G, WiMax or LTE networks without you even knowing (or caring) which specific network it's running on at the moment. No longer do you have to belong to a specific carrier — your phone automatically picks the strongest and cheapest network option at any given time. Your network access, along with voice, app/in-app purchases and everything else, is provided to you by the mobile platform provider. The carriers are only there to run network infrastructure and sell bandwidth to two or three mobile platform providers.
Let’s face it, the only two things that still connect carriers to consumers are the voice number and billing for the network access. SIM card technology is rudimentary — you can easily conduct user authentication using a simple login, just like Apple does on iPods when you want to buy apps or songs from the iTunes store.
Looking into the future, even the phone number itself will disappear. Why bother with all these numbers when you can just place a call directly to anybody’s Facebook profile?
This future is inevitable, and the changes are coming very soon. With mobile platform providers running the show today, carriers simply have no way of stopping the process. Not having any control over the platform vendors — for instance, via a consortium that would centrally license Android or other mobile platforms to equalize the balance of power between the platform provider and the carriers/OEMs — they will eventually give up on their ambitions to control the user. Just read the Google/Motorola/Skyhook story to see how it happens.
It only takes one carrier to crack and start selling bandwidth to Google, Microsoft or Apple; all other carriers will simply have no choice but to follow. It's like the prisoner's dilemma from economics textbooks: If neither prisoner talks, both win. But if they are separated and each is promised a way out (or an easier sentence) for talking first, game theory says the winning strategy for each prisoner is to talk. In other words, one of them will crack. The carriers are nowhere close to being united enough to stand together, even in the short to mid-term. Look how effortlessly Apple, and then everyone else, took over their app distribution businesses, something that only five years ago would have been totally unthinkable.
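The carriers' bind maps neatly onto the textbook game. Here is a minimal sketch of that logic in Python; the payoff numbers are invented for illustration and have nothing to do with any carrier's actual economics.

```python
# Hedged sketch: the carrier "defection" argument as a classic prisoner's
# dilemma payoff matrix. All payoff values are illustrative assumptions.

# Payoffs (mine, theirs) for each pair of strategies.
# "hold"  = refuse to sell cheap bandwidth to platform vendors
# "crack" = break ranks and partner with Google/Apple/Microsoft
PAYOFFS = {
    ("hold", "hold"):   (3, 3),  # cartel-like margins preserved
    ("hold", "crack"):  (0, 5),  # the defector grabs the platform deal
    ("crack", "hold"):  (5, 0),
    ("crack", "crack"): (1, 1),  # bandwidth becomes a commodity
}

def best_response(my_options, their_move):
    """Pick the strategy that maximizes my payoff given the rival's move."""
    return max(my_options, key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Whatever the rival does, "crack" pays more, so defection is dominant:
for their_move in ("hold", "crack"):
    print(their_move, "->", best_response(("hold", "crack"), their_move))
```

Both lines print `crack`: no matter what the other carrier does, defecting is the better move, which is exactly why a united carrier front is unstable.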
Most likely, these first-to-crack carriers will be tier-two, low-cost carriers outside the U.S., possibly acquired by, but more likely just partnering with, the big platform players. Those carriers will have a strong incentive to enter such partnerships, as their networks are already optimized for low cost (a lean, efficient cost structure without heavy marketing, support or premium-services overhead, plus better network logistics). In the short to mid-term, the strategy will be aimed against tier-one carriers, who carry a high marketing and operations cost burden. The UK actually looks like a very logical place to start, especially since some UK carriers have already been experimenting with Skype phones, which were successful enough that price-sensitive younger audiences actually started to carry Skype phones as their second device.
It will probably be a while before most users fully switch to non-carrier-provided voice/network services — maybe five to seven years — but it’s only a matter of time, as the new model is so much more compelling to the consumer. Signing up for multiple phone numbers as easily as opening email accounts, getting the best and the cheapest network at any given time in any spot (finally, no more service drops!), free and unlimited voice/video on WiFi networks, cheap roaming even when overseas on a local service, and so many more benefits are poised to take off.
Once this happens, carriers fall into a very undesirable position. Network access becomes an absolute commodity, much more so than in the case of landline ISPs. The latter at least have relatively high switching costs, while a mobile phone is already connected to every network available in its physical location. This means carriers compete head to head over who sells the cheapest bandwidth to Google, Apple or Microsoft, and only those most economically fit with the strongest network logistics survive in the game. This time, the brand, handset subsidies or any other marketing tricks are of no help — it’s all about economics.
What’s really interesting is what could happen with next-generation networks. As carriers see their margins disappear almost entirely and the profits shift to mobile platforms, operators won’t accumulate enough profits to be able to invest in next-generation networks. Nor does the marginalized economics of the network business promise them high ROI. Mobile platforms do the opposite: By that time, they’ll have accumulated profits for all the value-added services, so they’ll have both the money to invest and the strong economic incentive to do so. This will also be very lucrative to mobile platforms politically, as owning services end to end, from cloud to network to devices, enables a whole new level of control and market power.
Ilja Laurs is CEO of GetJar.
Neat little portable pack for charging all Apple products while on the move.
Aviiq’s new Portable Charging Station acts as a sort of USB hub in a bag: this little black travel sleeve lets you pack and power three USB devices (even an iPad) with one outlet. What’s more, the station allows for easy syncing by way of a retractable USB port.
Cost: $80
Do the world’s top personalities sign every document by hand? Have you ever received a celebrity’s autograph by post?
Well, it seems that one can put his signature onto a document even if he is miles away from it.
Apparently, that’s what President Obama did.
President Barack Obama was in Europe for the annual G-8 summit while Congress raced to pass legislation extending the authorization of key surveillance methods used to try to thwart attacks on the United States, which was due to expire Thursday night at midnight. Congress came through just hours before the deadline, but Obama was still in France.
According to a White House spokesperson, Obama used a device called an autopen, which mechanically reproduces a human signature.
That prompted at least one lawmaker, Georgia Republican Representative Tom Graves, to question whether that was legal or not, writing Obama a letter seeking clarification.
“I thought it was a joke at first, but the President did, in fact, authorize an autopen to sign the Patriot Act extension into law. Consider the dangerous precedent this sets. Any number of circumstances could arise in the future where the public could question whether or not the president authorized the use of an autopen,” Graves said in a statement.
Well, for those who may question the use of the autopen, there is a legal opinion issued during the Bush administration that gave the green light to using it.
“We conclude that the President need not personally perform the physical act of affixing his signature to a bill to sign it within the meaning of Article I, Section 7” of the U.S. Constitution, the 29-page opinion said.
“We emphasize that we are not suggesting that the President may delegate the decision to approve and sign a bill, only that, having made this decision, he may direct a subordinate to affix the President’s signature to the bill,” it said.
Smarter Task-Transfers Use Mobile Cameras
MIT and Google have devised a method of transferring tasks between your smartphone and your computer by merely pointing the cell-phone camera at your PC’s screen.
How many times have you found a Web page on your smartphone that you want open on your desktop computer? Or perhaps you were viewing your destination on Google Maps and want to transfer that to your smartphone. Using an experimental app called Deep Shot, you can do these things by merely aiming your smartphone’s camera at your computer.
Deep Shot was developed by MIT (Massachusetts Institute of Technology) and Google for a paper presented at the Association for Computing Machinery’s conference on Computer-Human Interaction. The app was created by MIT doctoral candidate Tsung-Hsiang Chang in MIT’s Computer Science and Artificial Intelligence Laboratory, and Google’s Yang Li.
The app works by taking a photo of your computer’s screen and, using pattern-recognition algorithms, ascertaining what program you are currently running and which document you have open. It then transfers that information from the desktop computer to your smartphone. To reverse the direction, the pattern recognition ascertains which desktop computer the phone is pointed at and transfers the file currently open on the smartphone to the desktop computer after opening the appropriate program there.
Google’s Deep Shot, developed with MIT, transfers tasks from the desktop to a smartphone, or vice versa, by merely pointing the phone’s camera at the PC screen. (Source: MIT)
Deep Shot works by encoding the currently running program and open file using an extended version of a standard uniform resource identifier (URI, of which the more familiar uniform resource locator, or URL, is a subset). After identifying the desktop computer (when transferring from a smartphone) or the application running on the desktop (when transferring to a smartphone), Deep Shot encodes the software’s state and sends the URI wirelessly to its companion.
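The article doesn’t spell out the exact URI scheme, but the idea can be sketched roughly as follows. Everything here, including the `deepshot://` scheme and the parameter names, is a hypothetical illustration, not the actual Deep Shot protocol.

```python
# Hypothetical sketch of Deep Shot-style state capture as an extended URI.
# The "deepshot://" scheme and all parameter names are invented for
# illustration; the real encoding is not published in this article.
from urllib.parse import urlencode, urlparse, parse_qs

def encode_state(app, document, view_params):
    """Serialize an application's current state into a URI string."""
    query = urlencode({"doc": document, **view_params})
    return f"deepshot://{app}?{query}"

def decode_state(uri):
    """Recover (app, params) so the companion device can restore the state."""
    parsed = urlparse(uri)
    return parsed.netloc, {k: v[0] for k, v in parse_qs(parsed.query).items()}

# e.g. a map view, mid-pan, handed from desktop to phone:
uri = encode_state("maps", "route.kml",
                   {"lat": "42.3601", "lng": "-71.0942", "zoom": "15"})
app, state = decode_state(uri)
```

The point of the round trip is that the receiving device only needs the URI, not a copy of the file, to re-open the same application in the same state.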
Don’t go looking to download the app just yet; this work is still in the proof-of-concept stage. To that end, the team has adapted several popular applications, including Google Maps and Yelp (the social networking and peer-review site), to work with the app. But in order to use Deep Shot with other programs besides Yelp and Google Maps, application developers will have to build in the ability to read and write URIs. If commercialized by Google, an application programming interface (API) would have to be published by Google and adopted by other application developers.
“I find it a really compelling use case, so I would really hope that companies like Microsoft would really consider adding it,” says Jeffrey Nichols, a researcher at IBM’s Almaden research center who specializes in mobile devices. “I see it being much more likely to happen with Websites than with desktop applications. On the other hand, to some extent, we’re moving away from desktop applications and moving more and more to the Web, so it’s not clear to me how important it is that we really bring all the native application developers into the fold.”
What’s a FabFi?
FabFi is an open-source, FabLab-grown system that uses common building materials and off-the-shelf electronics to transmit wireless Ethernet signals across distances of up to several miles. With FabFi, communities can build their own wireless networks to gain high-speed Internet connectivity, enabling them to access online educational, medical and other resources.
FabFi is a user-extensible long range point-to-point and mesh hybrid-wireless broadband transmission infrastructure. It is based on the simple idea that a network of simple, intelligent, interconnected devices can create reliable networks in unstable environments. We use simple physics to make low-cost devices communicate directionally for very long distances (physics is cool!), and flexible configurations to adapt to a large variety of conditions.
For extreme conditions, we mount commercial wireless routers on fabbed RF (radio frequency) reflectors with a wire-mesh surface that redirects the RF energy. Reflector gain depends on the materials used and the size of the reflector, but has been measured as high as 15 dBi with some of the current designs.
A single wireless link in the FabFi system consists of two reflectors with attached wireless routers. Similarly, two routers can be linked with a wired connection, and a single router can be linked to both wired and wireless connections at the same time. The system is configured so that individual links can be combined in numerous ways, creating links that cover very long distances or serve many users in a small area.
A key component of this linking is called “meshing.” A mesh network is one in which any device can connect to one or more neighboring devices in an unstructured (ad hoc) manner. Mesh networks are robust and simple to configure because the software determines the routing of data automatically, in real time, by sensing the network topology. Traditional mesh networks are limited in scale because they rely on single-radio, wireless-only connections and omnidirectional antennas. By using directed wireless links and wired transfers whenever possible, the FabFi system is optimized for building very large-scale static (as opposed to mobile) mesh networks. With scale comes the potential for robust digital communities within a region without dependence on high-bandwidth local uplinks, which are expensive and unavailable in many places.
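The automatic routing described above can be sketched as a shortest-path computation over whatever links a node currently senses. What follows is a generic illustration (plain Dijkstra over a made-up topology with invented link costs), not FabFi’s actual routing protocol.

```python
# Generic sketch of mesh routing: each node picks paths over the links it
# currently senses. Plain Dijkstra; link costs below are invented.
import heapq

def shortest_path(links, src, dst):
    """links: {node: {neighbor: cost}} as sensed from the live topology."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A wired hop (cost 1) is preferred over a lossy long wireless hop (cost 5):
topology = {
    "A": {"B": 5, "C": 1},   # A-B long wireless link, A-C wired
    "B": {"A": 5, "D": 1},
    "C": {"A": 1, "D": 1},   # C-D wired
    "D": {"B": 1, "C": 1},
}
print(shortest_path(topology, "A", "D"))  # → ['A', 'C', 'D']
```

If the A-C link drops out of the sensed topology, the same computation falls back to A-B-D automatically, which is the self-healing behavior that makes meshes simple to configure.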
How Reflectors Work
FabFi reflectors use the property of parabolic shapes (y = cx^2) that when a ray travelling perpendicular to a parabola’s directrix (i.e., parallel to its axis) hits the surface of the parabola, it is reflected to the parabola’s focal point (see MathWorld for more on this). By attaching an RF-reflective material such as window screen or chicken wire to a frame that forms the shape of a parabola in three dimensions, and then attaching the wireless router to the reflector at the focal point, we can precisely concentrate and direct the RF energy coming from the router in transmission and efficiently collect RF energy from the paired router in reception.
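As a quick sanity check on that geometry: for y = cx^2 the focal point sits at (0, 1/(4c)), and every point on the curve is equidistant from the focus and the directrix y = -1/(4c), which is the defining property behind the reflection. The numbers below are illustrative, not actual FabFi design dimensions.

```python
# Sanity check of the reflector geometry for y = c*x**2.
# The curvature value below is illustrative, not a FabFi design constant.

def focal_length(c):
    """The focus of y = c*x**2 sits at (0, 1/(4c)); the router mounts here."""
    return 1.0 / (4.0 * c)

c = 0.5
f = focal_length(c)  # focus at y = 0.5 above the vertex

# Defining property behind the reflection: every point on the parabola is
# equidistant from the focus and the directrix y = -f.
for x in (-1.0, 0.25, 2.0):
    y = c * x ** 2
    to_focus = (x ** 2 + (y - f) ** 2) ** 0.5
    to_directrix = y + f
    assert abs(to_focus - to_directrix) < 1e-9
print("focus height:", f)  # → focus height: 0.5
```

The practical upshot: a deeper dish (larger c) puts the focus closer to the vertex, so the router mounting arm can be shorter, at the cost of a trickier surface to fabricate.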
An essential component of the FabFi system is its flexibility to be implemented with whatever materials are locally available. All that’s required is the ability to print out a 2D design file and create the pieces out of whatever material you can find. If you have a Fab Lab, you can use a laser cutter or CNC wood router to create reflectors directly from wood, metal or acrylic, but there’s no reason they can’t be molded from clay, carved from stone or chiseled out of a block of ice, as long as there’s a way to attach a metallic RF-reflective surface to the front.
Three different reflector designs were implemented in Jalalabad during the initial deployment in January 2009: a large 4′ wooden version, a 2′ wooden version and an 18″ acrylic version. Reflective surface materials included chicken wire, woven stainless steel mesh and window screen.
Needs in the field subsequently drove the development of modified reflector designs with integrated weatherproofing and fastener-less assembly. These new designs debuted in the summer of 2010.
It was not long afterward, however, that network users began designing and building their own reflectors out of locally sourced scrap materials. While still in need of significant refinement, these reflectors are clear physical signs of technology transfer and local human-capital development. They also cost less than US$3!
Routers and Firmware
FabFi uses an open-source third-party firmware called OpenWrt on all of its routers. Taking advantage of OpenWrt’s Linux-based flexibility, FabFi devices can run a wide range of network monitoring and self-diagnostic tools. The current system supports real-time network monitoring, local web caching, centralized access control, user management and usage tracking (for billing). All of this is performed on devices costing $50 to $100 USD. Automated configuration has been steadily improving since the bygone days of the FabFi 1.0 release. We now support multiple routers across multiple FabFi distributions, and have the ability to configure networks with 802.11n speeds.
In developing regions, reliable power is an ongoing challenge. Conveniently, all of our currently supported devices run on 12 VDC and can easily be powered directly from a car or small-engine battery. A car battery and a couple of inexpensive chargers function as reliable UPS devices on two major distribution hubs in the Jalalabad network, powering a bank of routers for nearly two days without city power. In Kenya, we have designed a “node in a box” that provides UPS, mounting and weatherproofing to every node in the network, supporting mains or solar power. Future development is planned for a bare-bones 12V-to-12V UPS that can be integrated into installations by plugging the provided 100–240 VAC switching power brick into the fabbed UPS and the UPS into the router. Charging circuits for wind and other locally harvested power are a parallel FabLab project.
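A back-of-envelope check of the “nearly two days” figure is easy to run. The battery capacity, per-router draw and bank size below are all assumptions for illustration, not measured FabFi values.

```python
# Rough runtime check for a router bank on a car battery.
# All three figures below are assumptions, not measured FabFi values.

battery_wh = 12.0 * 70.0   # a common 70 Ah car battery at 12 V = 840 Wh
router_draw_w = 6.0        # a small 12 VDC router, assumed ~6 W
bank_size = 3              # routers on one hub, assumed

runtime_h = battery_wh / (router_draw_w * bank_size)
print(f"{runtime_h:.0f} h ≈ {runtime_h / 24:.1f} days")  # → 47 h ≈ 1.9 days
```

With those assumptions the math lands right around the reported two days, which is why a plain car battery plus cheap chargers makes a credible UPS for a distribution hub.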
The Fab Future
Despite the cobbled-together aesthetic, FabFi has proven incredibly reliable in Afghanistan’s harsh climate (it reaches 130°F in Jalalabad in the summer, with regular sandstorms). Beginning in the summer of 2010, we expanded the FabFi system to provide direct wireless access to client devices and have been running a community-scale Wi-Fi ISP. In more than two years of deployment, we can still count the hardware failures on one hand. To our surprise, the biggest challenge so far has been uplink bandwidth. While many countries tout “mobile broadband” as the solution to universal access problems, the ground truth in most places is that mobile devices alone do not provide sufficient performance (or affordable enough prices) to be viable without some help. In Kenya, FabFi provides a value-added service to communities where mobile connectivity is the only means of access, decreasing the data throughput needed per user and making it possible for providers to buy bandwidth in bulk.
A disposable camera the size of a grain of salt soon could be as much a part of the operating room toolkit as the traditional scalpel.
Called the NanEye, this tiny device eventually could wend its way into cameras, traffic lights, military equipment and a host of other items designed to protect, communicate and conduct surveillance. Developed by Fraunhofer Institute for Reliability and Microintegration (IZM) in Berlin, Germany, in partnership with AWAIBA of Portugal and with the support of the Fraunhofer Institute for Applied Optics and Precision Engineering IOF in Jena, Germany, the micro-camera has a lens that is 1 by 1 by 1.5 millimeters—just large enough to be seen by the naked eye.
The NanEye is inexpensive to make, so low-cost in fact that it is viewed as disposable, which changes the way in which medical facilities, researchers and others can use the device. In fact, Fraunhofer Institute originally developed the micro-camera in conjunction with AWAIBA, which manufactures digital camera sensors, for use in medical endoscopes in order to more easily and clearly view all internal areas of the human body.
An endoscope consists of a camera at the end of a tube, which contains a wire that transmits an image of a patient’s organs to a computer. Doctors use the tube to manipulate the camera through parts of the body, such as the gastrointestinal tract. Typically, an endoscope device costs about $25,000, and must therefore be sterilized and reused, further increasing the cost of maintaining the equipment. An endoscopy, which generally takes 20 minutes to 60 minutes, costs each patient between $800 and $2,000, according to Buzzle.com.
The micro-camera, slated to become available in 2012, will allow health care facilities to reduce the cost of the procedure and do more procedures each day due to reduced time allocated to cleaning the equipment.
And the small device could extend well beyond the reaches of the human body. In fact, the micro-camera’s low cost, coupled with its small size, is encouraging the developers to consider other markets such as the automotive industry, where car makers could use the small cameras to replace side-view mirrors, and government, where agencies could use the tiny cameras for surveillance and national security.
The design works because Fraunhofer researchers allowed connections to occur on the back of the sensor, not on the side, meaning a wafer of lenses could be mounted and electrically wired to the sensor wafer and the stack could be broken into 28,000 devices. In the past, a wafer would be chopped into 28,000 single sensors and lenses then would be attached. As a result of the new design, each micro-camera is much smaller for a lower cost, delivering a resolution of 250 by 250 pixels at a frame rate of 44 per second.
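Some quick arithmetic on what that sensor produces: the resolution and frame rate come from the figures above, while the 10-bit pixel depth is an assumption for illustration.

```python
# Rough data-rate arithmetic for the NanEye sensor. Resolution and frame
# rate are from the article; the 10-bit ADC depth is an assumption.

width = height = 250
fps = 44
bits_per_pixel = 10  # assumed; the actual sensor depth may differ

pixels_per_second = width * height * fps            # 2,750,000 px/s
mbit_per_second = pixels_per_second * bits_per_pixel / 1e6
print(f"{mbit_per_second:.1f} Mbit/s")  # → 27.5 Mbit/s
```

Even under these assumptions the stream is modest enough to push down a thin endoscope wire, which is part of what makes a disposable camera-on-a-wire practical.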
Animators, illustrators, design engineers and others creating 3D images have been deprived of one of the wonders of the modern age: copy and paste. But a new system promises to bring the power and convenience of copy/paste to the world of 3D image editing.
Wan-Yen Lo, of UC San Diego and University of Bern, along with Jeroen van Baar, of Disney Research Zürich, and their colleagues, has developed a software system for stereoscopic copy and paste that will greatly simplify editing of 3D images. This development is just in time for what could be a proliferation of such images.
The boy and other elements were copied and pasted from a different image using new stereoscopic editing technology (source: ACM Siggraph and the researchers).
Composing a realistic-looking 3D image from different source materials currently requires meticulous, tedious work. Typical tasks include essentially painting shadows, textures and other artifacts by hand. Spatial and lighting differences between the copied graphic and the image it’s pasted into have to be resolved. To avoid looking distorted, an image needs to include all the visual cues our eyes and brain know from reality.
As the research paper explains, “3D copy & paste has to take stereopsis into account and avoid stereopsis rivalry: conflicting cues to the human visual system in the left and right eye images which could severely strain the visual system, or even destroy the 3D illusion altogether.”
The biggest challenge is preserving a sense of depth—and this is one of the major things the research team has accomplished. To preserve the volume of the element being pasted, the team developed what it calls a stereo billboard method for rendering the copied object. (This video provides more detail.)
Let’s say you’d taken some photos with one of the 3D cameras Fuji now sells. Each scene would consist of two shots taken from slightly different perspectives (the pair can be combined into the familiar red/cyan anaglyph for red/blue glasses). The researchers’ software essentially computes the difference between the two versions to determine the depth between objects. The resulting depth map is used to guarantee that when the copied object is pasted into the composite image, it maintains the visual cues needed to make it appear realistic.
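The standard stereo relation behind such a depth map is depth = f·B/d: depth is inversely proportional to the pixel disparity between the two views. The sketch below uses that textbook relation with illustrative camera numbers; it is not the paper’s full algorithm.

```python
# Textbook pinhole-stereo relation behind a disparity-based depth map.
# This is the standard geometry, not the paper's full method; the focal
# length and baseline below are illustrative assumptions.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """depth = f * B / d: big pixel shift means close, small shift means far."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Assumed Fuji-style stereo camera: ~700 px focal length, 75 mm lens spacing.
near = depth_from_disparity(700, 0.075, 50)  # 50 px shift -> 1.05 m away
far = depth_from_disparity(700, 0.075, 5)    # 5 px shift  -> 10.5 m away
print(round(near, 2), round(far, 2))  # → 1.05 10.5
```

Running that relation over every matched pixel pair yields the depth map the editing system needs to keep a pasted object’s stereo cues consistent with its new surroundings.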
Which fruit was copied and pasted from different source images? Only the image editor knows (source: ACM Siggraph and the researchers).
The researchers have developed a complete editing system, including a precision tool for selecting an element of an image, similar to what you’d have with a program like Photoshop. They describe their software, which they introduced at the recent ACM Siggraph conference, as “intuitive” and responsive. Limitations to their current solution include vertical disparities and dealing with stereo baseline changes. They also point out that illumination differences between the copied image and the destination image are a “larger problem” that’s beyond the scope of this project.
The team hopes that someday these techniques can be applied to editing of stereoscopic 3D video.
Asked about any product plans, Wan-Yen Lo told Smarter Technology that this is “a research project, and commercialization is not planned yet.” But with an expected increase in 3D imagery—especially if the science fiction movies are right—there would be big demand for the capability she and her colleagues have developed. Once upon a time, the techniques embodied in Photoshop were research projects that no one planned to turn into a commercial product.
In the world of design mock-ups, where phones are seen with operating systems as-yet unavailable to them, the Nexus One can make video calls. Nope, this isn’t an internal hardware hack like we saw on the Vibrant; it’s a simple attachment in the form of an array of prisms and mirrors called OneMoreFace. We’ve already seen a few examples of this idea implemented for the older (pre-iPhone 4) iPhones, but this is probably the slickest design so far.
Right now it exists only in concept form, but the designer promises to add more info soon regarding availability for purchase. The piece looks small and sturdy enough to keep in your bag for whenever you feel an overwhelming urge to broadcast your mug to all and sundry. Should it eventually grace us with its presence here in the real world, it should make life without a Nexus S just that little bit easier to stomach.
Enemy pilots cannot discern immediately that what they are about to attack is not the real thing. The objects even appear authentic on thermal imagers.
They are all the size of the real items and when viewed by radar or satellite look like the actual things.
The Russian military deployed some of the models during the conflict with Georgia in 2008. Several Middle East countries including Iran have shown interest in the inflatables.
An inflatable T-80 tank costs about $6,000, while the real item, which is out of production, costs $100,000 to $1 million used.
More photos of the fake military machines follow. They look real from the air, don’t they?