
The Light L16 Isn’t “DSLR Quality”.

It’s well known that the camera industry has been struggling for some time, and the reason for that is simple: smartphones. There used to be a wide gap in quality between smartphones and dedicated cameras, but that gap has closed significantly over the past couple of years. Now the market segment that used to be dominated by a myriad of pocket cameras has all but evaporated. This has left something of a gap that some smaller companies have tried to fill, like Lytro did with their quirky light field cameras. Light is the next company to attempt to revitalize the pocket camera market, albeit in a way (and at a price point) that’s likely to fall as flat as Lytro’s Illum did.


The Light L16 is going to be their debut device, a pocket camera that contains no fewer than 16 independent camera modules scattered about its face. For any one picture up to 10 of these cameras can fire at once and, using their “computational photography” algorithms, the L16 can produce images of up to 52MP. On the back there’s a large touchscreen powered by a custom version of Android M, allowing you to view and manipulate your photos with the full power of a Snapdragon 820 chip. All of this can be had for $1,299 if you preorder soon or $1,699 when it finally goes into full production. It sounds impressive, and indeed some of the images look great, but it’s not going to be DSLR quality, no matter how many camera modules they cram into it.

You see, those modules are pulled from smartphones, which means they share the same limitations. The sensors themselves are tiny, around a tenth the size of the APS-C sensors found in most DSLRs and smaller still compared to full frame. The pixels on these sensors are thus much smaller, meaning they capture less detail and perform worse in low light than DSLRs do. You can overcome some of these limitations through multiple image captures, as the L16 does, however that’s not going to give you the full 52MP they claim due to computational losses. There are some neat tricks they can pull, like adjusting the focus point (à la Lytro) after the photo is taken, but as we’ve seen that’s not a killer feature for cameras to have.
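To make the multi-capture idea concrete, here’s a minimal Python sketch of frame stacking, the simplest form of computational photography: averaging several aligned captures of a static scene cuts random sensor noise by roughly the square root of the frame count. This is illustrative only; the L16’s actual fusion pipeline is proprietary and far more involved than a simple average.

```python
import numpy as np

def stack_frames(frames):
    """Average a list of aligned captures of the same static scene.

    Averaging N frames reduces random sensor noise by roughly sqrt(N),
    which is how small-sensor cameras claw back some quality. Real
    pipelines also have to align frames and merge detail, which is
    where claimed resolution figures start to get lossy.
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# Simulate ten noisy captures of the same scene.
rng = np.random.default_rng(42)
scene = rng.uniform(0.0, 1.0, size=(480, 640))
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(10)]

print(f"single frame noise:  {np.std(frames[0] - scene):.3f}")
print(f"stacked frame noise: {np.std(stack_frames(frames) - scene):.3f}")
```

Running that shows the stacked noise dropping to roughly a third of the single-frame figure, which is exactly the sqrt(10) improvement you’d predict; real detail recovery is much harder than noise reduction, hence the scepticism about the 52MP figure.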

Those modules are also arranged in a rather peculiar way, and I’m not talking about the way they’re laid out on the device. There are 5 x 35mm, 5 x 70mm and 6 x 150mm modules. This is fine in and of itself, however they can’t claim true optical zoom over that range as there are no gradations between those focal lengths. Sure, you can interpolate using the different lenses, but that’s just a fancy way of saying digital zoom without the negative connotations that come with it. The hard fact of the matter is that you can’t have prime lenses and act like you have zooms at the same time; they’re just physically not the same thing.
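For the sceptical, here’s a rough sketch of what any intermediate focal length between two primes boils down to: crop the wider capture and scale it back up. The nearest-neighbour resize below is deliberately crude, but the point holds for any interpolation scheme, as none of them can recover detail the crop threw away.

```python
import numpy as np

def digital_zoom(image, factor):
    """Fake a longer focal length by centre-cropping and upscaling.

    A "50mm" shot from a 35mm module is a ~1.43x centre crop blown
    back up to full size: the pixel count is unchanged but no new
    detail exists. Nearest-neighbour resize keeps the sketch short.
    """
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    return crop[rows][:, cols]

wide = np.random.rand(600, 800)
fake_50mm = digital_zoom(wide, 50 / 35)
print(fake_50mm.shape)  # (600, 800): same size, less real detail
```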

Worst of all is the price, which is already way above entry level DSLRs even if you purchase one of those new with a couple of lenses. Sure, I can understand form factor is a consideration here, however this camera is over double the thickness of current smartphones. Add to that the fact that it’s a separate device and I don’t think people who are currently satisfied with their smartphones are going to pick one up just because. Just like the Lytro before it the L16 is going to struggle to find a market outside of a tiny niche of camera tech enthusiasts, especially at the full retail price.

This may just sound like the rantings of a DSLR purist who likes nothing else, and in part it is, however I’m fine with experimental technology like this as long as it doesn’t make claims that don’t line up with reality. DSLRs are a step above other cameras in numerous regards, mostly for the control they give you over how the image is crafted. Smartphones do what they do well and are by far the best platform for those who use them exclusively. The L16, however, is a halfway point between them: it will provide much better pictures than any smartphone but it will fall short of DSLRs. Thinking any differently means ignoring the fundamental differences that separate DSLRs and smartphone cameras, something which I simply can’t do.

Nokia Resurfaces as a… Virtual Reality Video Startup?

Nokia was once the king of the phones that everyone wanted. For many it was because they made a solid handset that did what it needed to do: make calls and send text messages. Their demise came from their inability to adapt to the rapid pace of innovation spurred on by Apple and Google, their offerings in the smartphone space coming too late and their customers leaving for greener pastures. The result was that their handset manufacturing capability was offloaded to Microsoft, but a small part of Nokia remained independent, one that held all the patents and their research and development arm. It seems that part of Nokia is looking to take the company in crazy new directions, with their first product being the Ozo, a 360 degree virtual reality video camera.


Whilst Nokia isn’t flooding the newswaves with details just yet, we do know that the Ozo is a small spherical device that incorporates 8 cameras and microphones able to capture video and sound from any angle. It’s most certainly not the first camera of its kind, with numerous competitors already having products available in this space, but it is one of the better looking offerings out there. As for how it’d fare against its competition, that’s something we’ll have to wait to see, as the first peek at Ozo video is slated to come out just over a week from now.

At the same time Nokia has taken to the Tongal platform, a website that allows brands like Nokia to coax filmmakers into doing stuff for them, to garner proposals for videos that will demonstrate the “awesomeness” of the Ozo platform. To entice people to participate there’s a total of $42,000 and free Ozo cameras up for grabs for two lucky filmmakers, something which is sure to attract a few to the platform. Whether that’s enough to make the Ozo the platform of choice for VR filmmakers is another question, one I’m not entirely sure Nokia will like the answer to.

You see, whilst VR video has taken off of late due to YouTube’s support of the technology, it’s really just a curiosity at this point. The current technology effectively prohibits it from making its way into cinemas, due to the fact that you’d need to strap an Oculus Rift or equivalent to your head to experience it. Thus it’s currently limited in appeal to tech demos, 3D renderings and a smattering of indie projects. The market for such a device therefore seems pretty small, especially when you consider there are already a few players selling their products in this space. So whilst Nokia’s latest device may be a refreshing change for the once king of phones, I’m not sure it’ll become much more than a hobby for the company.

Maybe that’s all Nokia is looking for here, throwing a wild idea out to the public to see what they’d make of it. Nokia wasn’t exactly known for its innovation once the smartphone revolution began but perhaps they’re looking to change that perception with the Ozo. I’m not entirely convinced it will work out for them, anyone can throw together a slick website with great press shots, but the reaction from the wider press seems to indicate that they’re excited about the potential this might bring.

Watching Photons Fly.

You can see light’s presence everywhere, but have you ever seen it moving? Since light is the fastest thing we currently know of, it’s a rather elusive beast to see in motion, especially at the scale we exist in, and whilst it might look instantaneous it does have a finite speed. We’ve done many experiments in slowing light down, and even trapping it for short periods of time, but being able to watch a light ray propagate was out of our reach for quite some time, that is until the recent development of a couple of technologies.

The above video is the work of Ramesh Raskar and his team at MIT, who produced a camera capable of capturing 1 trillion frames per second. However it’s not a camera in the traditional sense, as the way it captures images is really unique. Most cameras these days are CCD based and capture an image of the whole scene, then read it off line by line and store it for later viewing. The MIT system makes use of a streak camera, which is only capable of capturing a line a single pixel high, essentially producing a one dimensional image. The trick here is that they’re taking a picture of a static scene and doing it multiple times over, repositioning the capture area each time in order to build up an image of the scene. As you can imagine this takes a considerable amount of time, and whilst there are some incredible images and movies created as a result, the conditions and equipment required to do so aren’t exactly commodity.
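A toy sketch of that scanning scheme might help: treat the streak camera as something that returns the time history of a single row, then sweep the row over the scene and stack the results into a full movie. The fake_streak stand-in below is entirely made up; it just simulates a repeatable pulse of light sweeping across the frame, which is the property that makes row-by-row capture possible at all.

```python
import numpy as np

def assemble_movie(capture_line, height, width, frames):
    """Rebuild a 2D movie from repeated one-dimensional streak captures.

    capture_line(y) is assumed to return a (frames, width) array: the
    time history of the single pixel-high row y. Because the scene is
    static and the light pulse repeatable, scanning y row by row and
    stacking the results yields a full (frames, height, width) movie.
    """
    movie = np.zeros((frames, height, width))
    for y in range(height):
        movie[:, y, :] = capture_line(y)
    return movie

# Toy stand-in for the streak camera: a bright pulse sweeping across x.
def fake_streak(y, frames=100, width=64):
    t = np.arange(frames)[:, None]
    x = np.arange(width)[None, :]
    return np.exp(-((x - t * width / frames) ** 2) / 8.0)

movie = assemble_movie(lambda y: fake_streak(y), 48, 64, 100)
print(movie.shape)  # (100, 48, 64)
```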

There are alternatives however as some intrepid hackers have demonstrated.

Instead of using the extremely expensive streak camera and titanium sapphire laser, their system utilizes a time-of-flight camera coupled with a modulated light source. From reading their SIGGRAPH submission it appears that their system captures an image of the whole scene, so to create the light flight movies they change when the light source fires relative to when the camera takes the picture. This process allows them to capture a movie much quicker than MIT’s solution, and with hardware that is a fraction of the cost. The resolution of the system appears to be lower, i.e. I can’t make out light wave propagation like you can in the MIT video, but for a solution that’s less than 1% of the cost I can’t say I fault them.
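For context, here’s the basic relationship a continuous-wave time-of-flight camera exploits: the phase delay between the emitted and returned modulated signal encodes distance. A minimal sketch, with an assumed 30MHz modulation frequency:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, modulation_hz):
    """Distance from the phase shift of a modulated light source.

    A continuous-wave time-of-flight camera measures the phase delay
    between the emitted and returned signal; the round trip covers
    phase / 2*pi modulation wavelengths, so halve it for one-way range.
    """
    wavelength = C / modulation_hz
    return (phase_shift_rad / (2 * np.pi)) * wavelength / 2

# A 30 MHz modulation gives a ~5 m unambiguous range;
# a quarter-cycle phase shift corresponds to ~1.25 m.
print(f"{tof_distance(np.pi / 2, 30e6):.3f} m")
```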

Their paper also states they’re being somewhat cautious with their hardware, currently running it at only 1% of its duty cycle. The reason for this is a lack of active cooling on their key components; they didn’t want to stress them too much. With the addition of some active cooling, which could be done for a very small cost, they believe they could significantly ramp up the duty cycle, dropping the capture time down to a couple of seconds. That’s really impressive and I’m sure there are even more optimizations that could be made to improve the other aspects of their system.

It’s one thing to see a major scientific breakthrough come from a major research lab but it’s incredible to see the same experiments reproduced for a fraction of the cost by others. Whilst this won’t be leading to anything for the general public anytime soon it does open up paths for some really intriguing research, especially when the cost can be brought down to such a low level. It’s things like this that keep me so interested and excited about all the research that’s being done around the world and what the future holds for us all.

My Stance on Instagram Explained.

Ho boy, rarely have I copped more flak for a post, both online and offline, than my piece early last year on how the general population of Instagram made me feel. In all honesty, whilst I knew there were a few people it would piss off, which was one of the reasons it sat in my drafts folder for ages, I still felt like I had some valid points to make based on my observations of the Instagram user base at large. Many people took offence to this, arguing points ranging from “Why should that matter to you anyway?” to “You’re using it wrong, there’s great communities on there”. I was hoping that the comments section would have been the end of it all, but late last week the topic came up again and I lost an hour in the ensuing debate, so I figured it was time I made my position on this whole matter more clear.


I recognise that for every example I can dredge up of someone posting a horribly framed and filtered picture of their breakfast, someone else can just as easily show me something like this. My criticism wasn’t levelled at people who use the service in this fashion, but reading back over the post and the ensuing comments I never really made that entirely clear, so mea culpa on that one. However I don’t feel that the general thrust of my argument has been invalidated by that, as many users agree that the vast majority of stuff on Instagram isn’t particularly great. This isn’t unique to Instagram, of course; any user generated content site suffers from Sturgeon’s Law, and honestly the mentality of users on said sites really doesn’t vary that much, but Instagram hit closer to home thanks to my interest in this particular area.

I’ve also had people try to bring me back into the Instagram fold in order to convince me that there’s something in the platform for me. Now whilst I wasn’t an active user for quite some time I did have the application installed on my Galaxy S2 for the better part of the year, mostly so I could view pictures linked to me on Twitter without having to use Instagram’s then rather shitty web interface. From time to time I’d look at pictures on there and see some genuinely good ones, but not often enough to convince me that it was worth investing my time to better my feed by subscribing to said users. The fact of the matter is I already have many other avenues for discovering photographers that I like, ones that share a critical characteristic with me: our preferred platform of choice.

For me the undisputed platform of choice is my DSLR. I’ve tried many other camera systems, from high end point and shoots to film SLRs and yes, multitudes of cameras in phones, but in the end I always come back to my DSLR. The reason for this is the amount of control and influence I have over the final image, something which I struggle with on any other platform. It may sound weird if you prefer the simplicity granted to you by camera phones (something which I do understand), but I find it a lot easier to take pictures on my DSLR, to the point where using anything else just frustrates me. I think that’s because I know that whilst I can do a lot of things in post should I so desire, there are some things I simply can’t unless I’m using my preferred platform.

This is somewhat at odds with the Instagram community which, as far as I’m aware, doesn’t take particularly kindly to those who take photos outside of their phone and then upload them via the service. If I was going to use Instagram again that’s the way I would use it, but I’d rather not antagonize the community further by breaking the current social norm on there. For now I really only use Facebook to distribute pictures (mostly because my recent photographic endeavours have involved friends’ weddings), but I’ve been a fan of Flickr and 500px for a long time now as they seem to be more my kind of people.

I’ve come to realise that even my beloved DSLR community isn’t immune to this kind of malarkey either, as there are far, far too many people walking around with a $1000+ camera, the shocking kit lens on it, shooting in auto and thinking that they’re the next Don McCullin. The criticisms I’ve levelled at Instagram apply to them as well, although they’ve yet to congregate onto a platform that’s as ubiquitous as Instagram has become.

After the backlash I received I set myself a challenge to try and use my camera phone to produce pictures that I’d be proud to share, and the above is probably one of the dozens I’ve taken that’s anywhere near what I wanted it to be. Six months of trying have shown me there’s definitely a lot of effort required to create good pictures, arguably the same amount as required by a DSLR, but I still feel like I’m constrained by my phone. Maybe that’s a personal thing, something that I could overcome with more time and dedication, but in saying that I’d propose the same thing to all the Instagrammers out there: borrow a friend’s DSLR and see the world from our side. Maybe you’ll come away with an appreciation for the technology that helped give birth to the platform you so love today.

Blue Marble 2012.

There are a couple of iconic photographs from space that everyone is familiar with. The most recognizable is probably the one I used a couple of years ago during the 40th anniversary celebration of the Apollo missions, showing Buzz Aldrin standing on the dusty surface of the moon. A few other notables are Earthrise, the Pale Blue Dot and the STS-1 mission liftoff (note the white external fuel tank, one of only 2 to have it), but above them all stands the Blue Marble, an incredibly breathtaking view of our Earth as seen by the Apollo 17 crew on their mission to the moon.

It’s a beautiful photo and one that changed my, and certainly many others’, view of the world. I don’t know why I used to think this, but before seeing this picture I imagined the world being mostly cloudless, not covered in the swaths of thick cloud that you see in the picture above. It also puts your entire life in perspective, much like the Pale Blue Dot does, knowing that in the end we’re all clinging to this giant water covered rock shooting through space.

Over the years NASA has set about recreating the Blue Marble as technology progressed, mostly as an aside to one of their many Earth sensing programs. The big difference between the original and these subsequent releases is that the newer ones are composite images, i.e. they’re not a single photograph. You can see this quite clearly in the 2005 version, which shows how the Earth would look if there was no cloud cover, something that’s simply impossible to photograph. The most recent addition to this lineage of whole Earth pictures is the Blue Marble 2012 and it’s quite spectacular:

The original picture is some 8000 x 8000 pixels (64 megapixels) and gives you an incredible amount of detail. The resolution is high enough for you to be able to pick out topographical details with relative ease, and you can even see the shadows that some of the clouds are casting on the ground below them. The original article that was linked to me had a lot of interesting comments (many on how the Americas appear to be somewhat distorted), but one that caught my attention was a question about one of the differences between the two pictures.

Why, they asked, is there no thin blue halo in the original picture?

The halo they were referring to is clearly visible if you view the larger version of the new Blue Marble picture yet seems distinctly absent in the original. The planet hasn’t radically changed (geologically, at least) in the time between the pictures, so the question is a curious one. To figure this out we have to understand the differences in how both these images came to be, and therein lies our answer.

The original Blue Marble was taken by a single 70mm Hasselblad camera with an 80mm lens at a distance of approximately 45,000km from the Earth. The newer version is a composite reconstruction from several images taken by the Suomi NPP satellite, which orbits at around 500km above the Earth’s surface. Disregarding the imaging technology used and the reconstruction techniques on the modern version, it becomes apparent that there’s a massive difference in the distance from which these pictures were taken. The halo is quite thin in comparison to the size of the Earth, so the further away you are the fewer pixels it will occupy. For the original Blue Marble the halo is pretty much invisible because the resolution of the camera is insufficient to capture it; the newer picture, being taken much closer and having a far higher effective resolution, captures it easily.
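Some back-of-the-envelope arithmetic makes the point. All the numbers below are assumptions for illustration: the visible atmospheric glow is taken to be roughly 100km thick, and the Earth’s disc is assumed to span about 2,000 pixels in a scan of the original frame versus the 8,000 pixels of the 2012 composite.

```python
def halo_pixels(earth_pixels, halo_km=100.0, earth_km=12_742.0):
    """Pixels the atmospheric halo spans, given how many pixels the
    Earth's disc occupies across the frame. The halo is ~0.8% of the
    Earth's diameter, so it lives or dies on effective resolution."""
    return earth_pixels * halo_km / earth_km

print(f"original scan:  ~{halo_pixels(2000):.0f} px")  # a faint rim
print(f"2012 composite: ~{halo_pixels(8000):.0f} px")  # a clear band
```

A rim of around 16 pixels is easy to lose to film grain and scanning, whereas a band of 60-odd pixels is unmissable, which would explain the commenter’s observation.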

These kinds of images are always fascinating to me, not just for their beauty but also for the story behind what went into creating them. The number of man hours that went into creating something that appears so simple is staggering, and demonstrates that we, as a species, are capable of great things if we put our minds to it. The Blue Marble 2012 might not become the icon its predecessor was, but it’s still an awe inspiring image to look at and an even more interesting one to contemplate.

Lytro: Light Field Technology Becomes a Reality.

One of my not-so-secret passions is photography. I got into it about 5 years ago when I was heading over to New Zealand with my then girlfriend (now wife), as I wanted a proper camera, something that could capture some decent shots. Of course I got caught up in the technology of it all and for the next year or so I spent many waking hours putting together my dream kit of camera bodies, lenses and various accessories that I wanted to buy. My fiscal prudence stopped me short of splurging much (I only lashed out once, for a new lens) but the passion has remained even if it’s taken a back seat to my other ambitions.

One of the greatest challenges in photography is getting the focus just right so that your subject is clear and the other details fade into the background, preferably with a nice bokeh. I struggled with this very problem recently when we threw a surprise party for the birthdays of my wife and one of her dearest friends. Try as I might to get the depth of field right on some of the preparations we were doing (like the Super Mario styled cupcakes), I just couldn’t get it 100% right, at least not without the help of some post production. You can imagine then how excited I was when I heard about light field technology and what it could mean for photography.

In essence a light field camera gives you the ability to change the focus, almost infinitely, after the picture has been taken. It can do this because it doesn’t capture light in the same way that most cameras do. Instead of taking one picture through one lens, light field cameras capture thousands of individual rays of light along with the direction from which they came. Afterwards you can use this data to focus the picture wherever you want and even produce 3D images. Even though auto-focus has done a pretty good job of eliminating the need to hand focus shots, the ability to refocus after the fact is a far more powerful advancement, one that could revolutionize the photography industry.
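The classic way to explain the refocusing trick is shift-and-add: treat the light field as a grid of sub-aperture views, shift each view in proportion to its position in the aperture, and average. This is a generic textbook sketch, not Lytro’s actual pipeline:

```python
import numpy as np

def refocus(subapertures, alpha):
    """Synthetic refocusing by shift-and-add.

    subapertures maps (u, v) aperture coordinates to (H, W) images.
    alpha picks the virtual focal plane: points at that depth line up
    across the shifted views and stay sharp, while everything else
    smears into bokeh. np.roll stands in for a proper sub-pixel shift.
    """
    acc = None
    for (u, v), img in subapertures.items():
        shifted = np.roll(img, (int(alpha * v), int(alpha * u)), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(subapertures)

# Toy 3x3 grid of views; sweeping alpha sweeps the focal plane.
views = {(u, v): np.random.rand(64, 64)
         for u in (-1, 0, 1) for v in (-1, 0, 1)}
print(refocus(views, alpha=2.0).shape)  # (64, 64)
```

The appeal is that alpha is just a parameter you pick after capture, which is why the focus point can be moved around at will once the rays are recorded.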

I first heard about it when Lytro, a light field based startup, mentioned that they were developing the technology back in June. At the time I was thinking that they’d end up being a manufacturer or licensor of their technology, selling their sensors to the likes of Canon and Nikon. However they’d stated that they were going to make a camera first before pursuing that route and I figured that meant we wouldn’t see anything from them for at least another year or two. I was quite surprised to learn that they have their cameras up for pre-order and delivery is expected early next year.

As a camera it defies current norms almost completely. It’s a squared-off cylinder with an LCD screen on the back, and the capture button is a capacitive notch on the top. From that design I’d assume you take pictures with it by holding it up like a ye olde telescope, which will be rather comical to watch. There are 2 models available, an 8GB and a 16GB one, which can hold 350 and 750 pictures respectively. The effective resolution that you get out of the Lytro camera seems to be about 1MP but the images are roughly 20MB each. The models come in at $399 and $499 respectively which, on the surface, seems a bit rich for something that does nothing but take really small photos.
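As a quick sanity check, those quoted capacities do line up with the ~20MB per-shot figure (assuming 1GB is treated as 1024MB here):

```python
# Quoted capacities vs the ~20MB per-picture figure.
for gb, shots in [(8, 350), (16, 750)]:
    print(f"{gb}GB / {shots} shots ≈ {gb * 1024 / shots:.1f} MB per picture")
# 8GB / 350 shots ≈ 23.4 MB per picture
# 16GB / 750 shots ≈ 21.8 MB per picture
```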

However I think Lytro is going the right way with this technology, much like Tesla did when they first released the Roadster. In essence the Lytro camera is a market test, as $400 is almost nothing compared to the amount of money a photography enthusiast will spend on a piece of kit (heck, I spent about that much on the single lens I bought). Many will be bought as a curiosity, and that will give Lytro enough traction to continue developing their light field technology, hopefully one day releasing a sensor for the DSLR market. From the amount of buzz I’ve read about them over the past few days it seems like that is a very real possibility, and I’d be one of the teeming masses lining up to get a DSLR with that kind of capability.

They’re not the only light field camera maker out there either; heck, they’re not even the first. Raytrix, a 3D camera manufacturing company, was actually the first to market with a camera that incorporated light field technology. Looking over their product range they’ve got quite the selection of cameras available for purchase, although they seem to be aimed more at the professional rather than the consumer market. They even offer to convert your favourite camera into a light field one and give you some rough specs of what your camera will be post conversion. Lytro certainly has its work cut out for it with a company like Raytrix competing against it, and it’ll be interesting to see how that develops.

On a personal level this kind of technology gets me all kinds of excited. I think that’s because they’re so unexpected, I mean once auto-focus made it easy for anyone to take a picture you’d think that it was a solved problem space. But no, people find ingenious ways of using good old fashioned science to come up with solutions to problems we thought were already solved. The light field space is really going to heat up over the next couple years and it’s got my inner photographer rattling his cage, eager to play with the latest and greatest. I’m damned tempted to give into him as well as this tech is just so freakin’ cool.

The Spy Satellite HEXAGON: Ah, Now The Shuttle’s Design Makes Sense.

Whilst the Space Shuttle will always be one of the most iconic spacecraft humanity has created, its design was one of compromises and competing objectives. One design feature, which influenced nearly every characteristic of the Shuttle, was the requirement from the Department of Defense stipulating that the Shuttle needed to be able to launch into a polar orbit and return after a single trip around the Earth. This is the primary reason the Shuttle is so aeroplane like in its design: it required those large wings for the long downrange capability needed to return to its launch site after a single orbit. The Shuttle never flew such a mission, but now I know why the DoD required this capability.

It was speculated that that particular requirement was spawned out of a need to capture spy satellites, both their own and possibly enemy reconnaissance craft. At the time digital photography was still very much in its infancy and high resolution imagery was still film based, so any satellite based spying would be carrying film on board. The Shuttle then could easily serve as the retrieval vehicle for the spy craft as well as functioning as a counter intelligence device. It never flew a mission like this for a couple of reasons, mostly that a Shuttle launch was far more expensive than simply deorbiting a satellite and sending another one up there. There was also the rumour that Russia had started arming its spacecraft, which would have made sending humans up there to retrieve them an unnecessary risk.

The Shuttle’s payload bay was also quite massive in comparison to the spy satellites of the time, which put the DoD’s requirements further into question. It seems however that a recently declassified spy satellite, called HEXAGON, was actually the perfect fit and could have influenced the Shuttle’s design:

CHANTILLY, Va. – Twenty-five years after their top-secret, Cold War-era missions ended, two clandestine American satellite programs were declassified Saturday (Sept. 17) with the unveiling of three of the United States’ most closely guarded assets: the KH-7 GAMBIT, the KH-8 GAMBIT 3 and the KH-9 HEXAGON spy satellites.

“I see a lot of Hubble heritage in this spacecraft, most notably in terms of spacecraft size,” Landis said. “Once the space shuttle design was settled upon, the design of Hubble — at the time it was called the Large Space Telescope — was set upon. I can imagine that there may have been a convergence or confluence of the designs. The Hubble’s primary mirror is 2.4 meters [7.9 feet] in diameter and the spacecraft is 14 feet in diameter. Both vehicles (KH-9 and Hubble) would fit into the shuttle’s cargo bay lengthwise, the KH-9 being longer than Hubble [60 feet]; both would also fit on a Titan-class launch vehicle.”

HEXAGON is an amazing piece of Cold War era technology. It was equipped with two medium format cameras that would sweep back and forth to image an area 370 nautical miles wide. Each HEXAGON satellite carried some 60 miles’ worth of film in 4 separate film buckets, which would detach from the craft when used and return to Earth, where they would be snagged by a capture craft. They were hardy little canisters too: one of them ended up on the bottom of an ocean but was retrieved by one of the navy’s Deep Submergence Vehicles. There were around 20 launches of the HEXAGON series of craft with only a single failure towards the end of the program.

What really surprised me about HEXAGON though was the resolution they were able to achieve some 30+ years ago. HEXAGON’s resolution was improved throughout its lifetime, with later missions achieving a resolution of some 60cm, more than enough to make out people and capture very detailed images of, say, cars and other craft. For comparison GeoEye-1, which had the highest resolution camera of any Earth imaging craft at the time of its launch, is only just capable of 40cm per pixel resolution (and that imagery is property of the US government). Taking that into consideration I wonder what kind of imaging satellite the USA is using now, considering that the DoD appears to be a couple of decades ahead of the commercial curve.

It’s always interesting when pieces of a larger puzzle like the Shuttle’s design start falling into place. Whilst it’s debatable whether or not HEXAGON (and its sister craft) were a direct influence on the Shuttle, there are enough coincidences to give the theory a bit of credence. I can see why the USA kept HEXAGON a secret for so long; that kind of capability would’ve been downright scary back in the ’80s, and its reveal makes you wonder what they’re flying now. It’s stuff like this that keeps me obsessed about space and what we, as a species, are capable of.

Apple’s iPad 2: Eh, Nothing Surprising.

So here we are, 1 year and 1 month after the initial release of the iPad, and Apple has, to no one’s surprise, released the newest version of their product, the iPad 2. As anyone who knows me will tell you there’s no love lost between me and Apple’s “magical” device that filled a need where there wasn’t one, but I can’t argue with the fact that it’s been quite successful for Apple and they arguably brought tablets into the mainstream. Still, Apple has a habit of coming late to the party with features that have been part and parcel of competing products, and the iPad 2 is no exception to this rule.

The iPad 2 is mostly an incremental hardware upgrade to the original iPad as the technical specifications reflect (cellular model specs shown):

  • Wi-Fi + 3G model: UMTS/HSDPA/HSUPA (850, 900, 1900, 2100 MHz); GSM/EDGE (850, 900, 1800, 1900 MHz)
  • Wi-Fi + 3G for Verizon model: CDMA EV-DO Rev. A (800, 1900 MHz)
  • Wi-Fi (802.11a/b/g/n)
  • Bluetooth 2.1 + EDR technology
  • 9.7-inch (diagonal) LED-backlit glossy widescreen Multi-Touch display with IPS technology
  • 1024-by-768-pixel resolution at 132 pixels per inch (ppi)
  • 1GHz dual-core Apple A5 custom-designed, high-performance, low-power system-on-a-chip
  • Back camera: Video recording, HD (720p) up to 30 frames per second with audio; still camera with 5x digital zoom
  • Front camera: Video recording, VGA up to 30 frames per second with audio; VGA-quality still camera
  • Built-in 25-watt-hour rechargeable lithium-polymer battery
  • Up to 10 hours of surfing the web on Wi-Fi, watching video, or listening to music
  • Up to 9 hours of surfing the web using 3G data network
  • Three-axis gyro
  • Accelerometer
  • Ambient light sensor
  • Wi-Fi
  • Digital compass
  • Assisted GPS

Most notably the iPad 2 is 33% thinner and 15% lighter than its predecessor. To put that in perspective, that makes the iPad 2 thinner than the iPhone 4, which is pretty damn slim to begin with. Additionally the iPad 2 comes with a dual core A5 processor (not to be confused with the ARM Cortex-A5) as well as front and rear cameras. Rumoured features like a Retina-esque display for the iPad 2 were just that it seems, with this device retaining the same screen as its predecessor. Additionally, although Apple is going to be offering the iPad 2 on the Verizon network, it will not be capable of accessing their 4G LTE network, unlike other tablets such as the Motorola Xoom.

In addition to announcing the iPad 2 Apple also announced the upcoming update to iOS, version 4.3. Amongst the rudimentary things like updates to AirPlay and Safari, Apple is also giving all 3GS handsets and above the ability to create a wireless hotspot using the 3G connection on the phone. Tethering has been available via Bluetooth and USB cable for a long time now, but if you wanted the hotspot functionality you were relegated to the world of jailbreaking, so it’s good to see Apple including it in an official release. There’s also iTunes home sharing, which allows you to view your entire iTunes library without having to sync it all to your phone, which I can see being handy but not really a killer feature.

Like the vast majority of Apple products, many of the features they are releasing today have been available from competitors for a long time beforehand. Wireless tethering has been around for quite a while (hell, I even had it on my Xperia X1), so it makes me wonder why Apple omits features like this when they’re so rudimentary. The same can be said for the original iPad being bereft of cameras, as many who saw the device instantly recognized its potential as a great video conferencing device. In all honesty I believe that the lack of cutting edge features on most Apple products is not simply because they want to make everything perfect; more it’s about keeping enough features up their sleeves to be able to release a new iteration of their iDevices every year. If they included everything they could from the get go, their scope for future upgrades would narrow considerably, along with their potential profit margins.

It should really come as no surprise then that the iPad 2 doesn’t come with a Near Field Communications chip in it. Now no one was really expecting that, all the rumors pointing to the iPhone 5 being the first Apple product to have it, but Apple could have had a huge advantage in driving the technology had they included it in their latest offering. Heck, I’d probably even be lining up to grab one if it had NFC in it, just because I’ve got a couple of start up ideas that need an NFC tablet and phone, but I guess that will have to wait until the next generation, if that.

Apple has also redesigned the cover that they’ll be selling alongside the iPad 2. The original one, which drew the ire of some Apple fanboys, was a more traditional case in the sense that it covered up the entire iPad. The new one is more of an elaborate screen protector, but it has some novel uses thanks to its sectioned design, letting you prop up the iPad in landscape mode. It also makes use of the new proximity sensor on the iPad 2, turning off the screen when you close the cover.

Honestly the iPad 2 is everything we’ve come to expect from Apple: an incremental improvement to one of their now core products. Even though I’m starting to come around to the tablet idea (I don’t know what it is, but the Xoom just tickles my fancy), Apple’s offerings are just never up to scratch with the competition, especially considering how good Android Honeycomb is looking. Still, it will be interesting to see how the first hardware refresh of the iPad fares, as that will be telling of how large the tablet market is and whether Apple can continue to hold dominance in the space they helped bring into the mainstream.