7 months down the line and I'm still a big fan of my Samsung Galaxy S2. It's been a great phone, combining a large screen with a slim, lightweight shell that I sometimes have to check for just to remind myself that it's still in my pocket. It's surprisingly resilient as well, having taken more than a couple of drops from pretty decent heights and come out the other end with only minor scuffs and nary a scratch on the screen. Sadly I can't say as much for the battery life, as it seems the more apps I pile on there the worse it gets, but I can't really blame the phone for my app-hoarding ways.
However I always knew that this relationship would be temporary; I mean, how could it not be? It started with geek wanderlust, and as with all relationships that start that way it was inevitable that my eyes would begin to wander. And so they have, with this announcement:
…Ladies and gentlemen, here is the Samsung Galaxy S III:
- 1.5GHz quad-core Samsung Exynos processor
- 4.8-inch “full HD” 1080p resolution with 16:9 aspect ratio display
- A 2-megapixel front-facing camera and an 8-megapixel rear camera
- Ceramic case
- 4G LTE
- Android 4.0
I'll spare you the photoshopped Galaxy S2 images that are doing the rounds, but suffice to say those specs are pretty darn amazing. They're also fairly plausible given Samsung's research into the component technologies and current trends for both carriers and the Android platform. The detail that caught my eye, however, was the ceramic case, as that's not a material you'd usually expect to see on a mobile phone, with plastic and glass being the only 2 real choices. There could be good reasoning behind it though, and if my suspicions are correct it's due to the crazy amount of tech they've stuffed under the hood.
Traditionally ceramics are pretty poor heat conductors which is why they make for good mugs and insulation materials. However there are quite a few advanced ceramics that are very capable of moving heat just as efficiently as most metals are, some even better. Now anyone who has a dual core smart phone knows how hot the buggers get when you’re using them for an extended period and since most phones are plastic that heat tends to stick around rather than dissipate. The ceramic case could then be an attempt to mitigate the heat problems that will come with the quad core processor and larger screen. This also has the potential to make the phones somewhat more brittle however (ceramics don’t flex, they shatter) so it will be interesting to see how Samsung compensates for that.
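To put some rough numbers on that heat argument, here's a back-of-the-envelope sketch using Fourier's law of conduction. The conductivity figures and case geometry are representative values I've assumed for illustration, not anything Samsung has published:

```python
# Back-of-the-envelope conduction estimate using Fourier's law: Q = k * A * dT / d.
# Conductivity values are typical textbook figures, not Samsung's actual materials.

def conductive_heat_flow(k_w_per_mk, area_m2, thickness_m, delta_t_k):
    """Steady-state heat flow (watts) through a flat wall of the given material."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

# Assumed case geometry: ~50 cm^2 of back panel, 1 mm thick, 15 K hotter inside.
area, thickness, delta_t = 50e-4, 1e-3, 15.0

materials = {
    "ABS plastic": 0.25,          # typical plastic phone shell
    "alumina ceramic": 30.0,      # common engineering ceramic
    "aluminium (reference)": 205.0,
}

for name, k in materials.items():
    q = conductive_heat_flow(k, area, thickness, delta_t)
    print(f"{name}: {q:.1f} W")
```

The absolute wattages matter less than the ratio: an engineering ceramic moves heat roughly two orders of magnitude better than a plastic shell, which is the whole point of the rumoured case.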
With just those few details I'm already excited for Samsung's next instalment in their flagship line of smartphones. The last 2 iterations of the Galaxy S line have gone from strength to strength, firmly cementing Samsung as the number one Android handset manufacturer. The Galaxy S3 looks to continue this trend with specifications that are sure to tempt even the most recent purchasers of the S2. I know I'll find it hard to resist, and I'm thankful that it probably won't be out for a little while longer.
I don’t think my wallet would appreciate buying 2 phones within 7 months of each other
I'm not really sure I could call myself a fanboy of any technology or company any more. Sure, there are some companies whose products I really look forward to, but if they do something completely out of line I won't jump to their defense, instead choosing to openly criticize them in the hope that they'll get better. Still, I like to make known which companies I may look upon with a rose tint, just so that anyone reading these posts knows what they're getting themselves into. One such company is Sony, who I've been a long-time fan of but have still criticized when I've felt they've done me wrong.
Today I’ll be doing that once again.
As you're probably already aware, the PlayStation Network (PSN), the online service that allows PS3 owners to play with each other and buy digital content, was recently compromised by an external entity. The attackers appear to have downloaded all account and credit card information stored on Sony's servers, prompting Sony to shut down the service for an unknown amount of time. The breach is of such a large scale that it has received extensive coverage in both online and traditional news outlets, raising questions about how it could occur and what safeguards Sony actually has to prevent such an event.
Initially there was little information as to what this breach actually entailed. Sony had chosen to shut down the PSN to prevent any further breaches and left customers in the dark as to why. It took them a week to notify the general public that there had been a breach and another 4 days to contact customers directly. Details remained scant until Sony sent an open letter to Congress detailing their current level of knowledge. Part of the letter hinted that the hacktivist group Anonymous may have played a part, though it did not blame them directly. More details have become public since then.
It has also recently come to light that the servers Sony was using for the PSN were running outdated versions of the popular Apache web server and lacked even the most rudimentary security provisions you'd expect of an online service. This information was public knowledge several months before the breach occurred, with posts on Sony's forums detailing the PSN servers' status. As a long-time system administrator I find it ludicrous that the servers were allowed to operate in such a fashion, and I'm pretty sure I know where to lay the blame.
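It's worth noting how easy that kind of forum sleuthing is: many web servers volunteer their version in the `Server` response header, so spotting a stale build is little more than a string comparison. Here's a sketch of the idea; the version numbers are illustrative, not the actual PSN builds:

```python
# Sketch: many web servers advertise themselves via a "Server: Apache/2.x.y"
# response header, so a stale build can be flagged with a simple version check.
# The cutoff version below is an arbitrary example, not a real security baseline.

import re

def parse_apache_version(server_header):
    """Extract a (major, minor, patch) tuple from a 'Server:' header value."""
    m = re.search(r"Apache/(\d+)\.(\d+)\.(\d+)", server_header)
    return tuple(int(x) for x in m.groups()) if m else None

def is_outdated(server_header, minimum=(2, 2, 17)):
    """True if the header reveals an Apache build older than `minimum`."""
    version = parse_apache_version(server_header)
    return version is not None and version < minimum

print(is_outdated("Apache/2.2.15 (Unix)"))  # old build leaks its age: True
print(is_outdated("Apache/2.2.21 (Unix)"))  # newer build passes: False
```

This is also why hardened deployments suppress version disclosure in the first place; an up-to-date server that says nothing gives attackers one less data point.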
Whilst Anonymous weren't behind this attack, they may have unwittingly provided cover for part of the operation. Their planned DDoS on the PSN servers did go ahead and would've provided a timely distraction for any would-be attacker looking to exploit the network. Realistically they wouldn't have been able to get much of the data out at this point (or so I assume; Sony's servers could have shrugged off the DDoS), but it would have given them ample opportunity to set up the system for the data dump in the second breach that occurred a few days later.
No, the blame here lies squarely with those in charge, namely the PSN architects and executives. The reason I say this is simple: an engineer worth his salt wouldn't allow servers to run unpatched without strict security procedures in place. Building something on the scale of the PSN requires at least a modicum of expertise, so I can't believe they would build a system like that unless instructed to do so. I believe this stems from Sony's belief that the PS3 was unhackable and as such could be trusted as a secure endpoint. Security 101 teaches, however, that no client can be trusted with the data it sends you, which explains why Sony became so paranoid when even the most modest of hacks showed the potential for the PS3 to be exploited. In the end it was Sony's superiority complex that did them in, pretending their castle was impregnable.
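That "never trust the client" rule is easiest to see with a toy example. Below is a minimal sketch of a storefront that recomputes the price server-side instead of believing whatever total a (possibly hacked) console submits; the item names and prices are invented for illustration:

```python
# "Never trust the client": a toy storefront that recomputes the total from its
# own catalog instead of believing the price the client claims. Item names and
# prices are made up for this example.

CATALOG = {"avatar_hat": 199, "map_pack": 999}  # prices in cents, server-side truth

def checkout(requested_items, client_claimed_total):
    """Charge the server-computed total; reject requests that don't match it."""
    try:
        real_total = sum(CATALOG[item] for item in requested_items)
    except KeyError as bad_item:
        raise ValueError(f"unknown item: {bad_item}")
    if client_claimed_total != real_total:
        # A tampered client (e.g. a hacked console) lied about the price.
        raise ValueError("client total does not match server total")
    return real_total

print(checkout(["avatar_hat", "map_pack"], 1198))  # honest client: prints 1198
```

Treat the console as nothing more than an untrusted input source and a modded firmware can't buy content for free; treat it as a secure endpoint and you get exactly the kind of mess Sony found itself in.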
The fallout from this incident will be long and wide-reaching, and Sony has a helluva lot of work to do if they're going to fully recover from this damage. Whilst they're doing the right thing in offering some restitution to everyone who was affected, it will still take them a long time to rebuild all the goodwill they've burned through this incident. Hopefully this teaches them some valuable lessons on security and they'll stop thinking they sit atop an impregnable ivory tower. In the end it will be worth it for Sony, if they choose to learn from their mistakes.
It was almost 4 months ago that I woke up in Orlando, Florida, eagerly awaiting my trip to the fabled Kennedy Space Center and a day to be filled with all manner of space-related fun. It was that same day that I had a dream torn from me, leaving my heart broken and me wanting to get as far away from that place as possible. Reading over the post today brought the whole day flooding back, along with the emotions that came with it. Still, despite the pain of a dream not realized, I couldn't pull myself away from Twitter and the NASA TV stream, eagerly devouring each and every little detail of Discovery's final launch into outer space.
And less than 30 minutes ago STS-133 launched from the Kennedy Space Center launch complex 39A.
Discovery's final flight has been marred by a multitude of technical problems. The first 2 scrubs were due to leaks in the Orbital Maneuvering System, which is used to control the space shuttle whilst it's in orbit. The system consists of two pods at the rear of the orbiter, each with a low-thrust engine that uses hypergolic propellant; a leak in these would mean the shuttle would be unable to dock with the International Space Station. The leak was thought to be fixed and the launch was good to go on that fateful day, but Discovery wasn't going without a fight.
The next launch window was scrubbed due to a problem with the backup main engine controller. Initial diagnostics showed some transient contamination, and a reboot brought everything back into line. However, further troubleshooting again found nothing wrong, though an unexpected voltage drop was observed. This led them to delay the launch for 24 hours in order to find the issue. The next attempt was delayed due to weather, and since I was there on the day I could see why. The final day of that launch window saw a hydrogen leak from the main tank that was outside acceptable mission limits, and the mission was scrubbed until today.
The external tank on Discovery had multiple issues. The first was the connector used to vent off excess hydrogen during fueling, which was what caused the final delay before Discovery's launch. During the investigation into why there was such a substantial leak, cracks were discovered in some of the external tank's insulation, and upon further inspection it was found that many parts of the external tank had cracks through them. The construction of these particular parts differed from what was used previously, and NASA has stated that this contributed to the cracking. Extensive repairs were carried out on the tank and it was only declared flight-ready earlier this year. This meant the turnaround time for Discovery was the longest of any shuttle mission bar STS-35, at 170 days.
What's so special about STS-133, however, is the sheer amount of payload it will be delivering to the ISS. The first item is the Permanent Multipurpose Module, a modified version of one of the Multi-Purpose Logistics Modules that have flown on many previous shuttle missions. Not only will it deliver almost 8 tons of cargo to the space station, it will also add a significant amount of livable space to the ISS, rivaling that of the Kibo module. A good deal of future crew time will be dedicated to configuring the PMM, and it's sure to prove valuable to the ISS.
Another interesting bit of cargo that's making its way to the ISS is Robonaut2, the first humanoid robot ever to visit the station. The idea behind it is that a humanoid robot could be capable of performing many of the tasks an astronaut does, such as space station maintenance. Initially it will be housed inside the ISS and will undergo strict testing to see how it copes in the harsh environment of space. After a while its capabilities could be expanded, and it might not be long before you see Robonaut working alongside astronauts on EVAs. This could be quite a boon for the astronauts on the ISS, as planning repairs can be quite time-consuming and Robonaut could provide a speedy alternative in the event of an emergency.
The last, but certainly not least, bit of Discovery's final payload is the SpaceX DragonEye sensor. This isn't the first time that NASA has flown something for SpaceX, having taken the same sensor up on board STS-127 and STS-129, but it is likely to be the last time the sensor is flown before a real Dragon capsule attempts to use it to dock with the space station. The DragonEye sensor is an incredibly sophisticated bit of kit. It provides a 3D image based on LIDAR readings and can determine range and bearing information. The whole system went from concept to implementation in just on 10 months, showing the skill the SpaceX guys have when it comes to getting things done.
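The range-and-bearing part is just geometry once you have a 3D point from the sensor. Here's a generic sketch of that conversion under an assumed axis convention; this is textbook math, not SpaceX's actual processing pipeline:

```python
# A docking sensor like DragonEye returns 3D points; range and bearing follow
# from basic geometry. Generic math only, not SpaceX's actual algorithms.

import math

def range_and_bearing(x, y, z):
    """Range (m), azimuth and elevation (degrees) of a point in the sensor frame.

    Assumed convention: x along the boresight (forward), y right, z up.
    """
    rng = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.asin(z / rng)) if rng else 0.0
    return rng, azimuth, elevation

# A target 100 m ahead, 10 m to the right, level with the sensor.
rng, az, el = range_and_bearing(100.0, 10.0, 0.0)
print(f"range {rng:.1f} m, azimuth {az:.1f} deg, elevation {el:.1f} deg")
```

The hard part of a sensor like this isn't the trigonometry, of course; it's producing clean, dense 3D points fast enough for closed-loop rendezvous, which is where the 10-month turnaround is so impressive.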
To be honest I was going to put off doing this post for a couple of days, just because I didn't want to think about STS-133 any more than I needed to. But the second I saw that the NASA TV stream was up I couldn't help but be glued to it for the entire time it was up. Sure, I might not be there to see it in person, but I've finally remembered why I became so enamored with space in the first place: it's just so damned exciting and inspiring. I may have had my heart broken in the past, but when a simple video stream of something I've seen dozens of times can erase all that hurt, I know that I'm a space nut at heart and I'll keep coming back to it no matter what.
The last thing you want as a developer is for your code to go out into the wild before it's ready. When that happens people start to build expectations on a product that's not yet complete and form assumptions that, for better or worse, don't align with the vision you had so carefully constructed. Most often this happens as a result of management pressure, and there's been many a time in my career when I've seen systems moved into production long before they were ready for prime time. However the damage done there pales in comparison to what can be done to a game that's released before it's ready, and I'm almost ashamed to admit that I've delved into this dark world of game leaks before.
The key word there is, of course, almost.
I remember my first steps into this world quite well. It was late 2002, and news began to make the rounds that someone had leaked an early alpha build of Doom 3, the first new installment in the series in almost a decade. I was incredibly intrigued and began my search for the ill-gotten booty, scouring the vast recesses of eDonkey and Direct Connect, looking for someone who had the magical files. Not long after, I was downloading the 380MB file over my dial-up connection, and I sat back whilst I waited for it to come down.
After it finished downloading I unzipped the package and waited whilst the crazy compression program they had used did its work, feverishly reassembling the code so that I could play it. This took almost an hour, and the eventual result was close to double the size of the file I had downloaded, something I was quite thankful for. After a few tension-filled seconds of staring at the screen I double-clicked the executable and was greeted with the not-yet-released version of Doom 3. The game ran extremely poorly on my little box, but even then I was awestruck, soaking up every second until it crashed on me. Satisfied, I sank back into my chair and hopped onto Trillian to talk to my friends about what I had just seen.
It wasn't long until I jumped back into this world again. Just under a year later, rumors started to make the rounds that none other than Valve had been subjected to a sophisticated attack and the current version of Half Life 2 copied. The gaming community's reaction was mixed, as we had been promised that the game would be released that year, but as far as everyone could tell the current build was nowhere near ready. Instead of jumping straight in this time, however, I sat back and considered my position. Whilst I was extremely eager to see Valve's latest offering, I had seen the damage done by Doom 3's premature release, and my respect for Valve gave me much trepidation about taking the plunge once again. But seeing the files on someone's computer at a LAN, I couldn't let the opportunity go by and I snagged myself a copy.
The game I played back then, whilst by no means a full game, still left a long-lasting impression on me. The graphics and environments were beautiful, and the only level I got to work properly (I believe it was the beach level) was made all the more fun by the inclusion of the makeshift jeep. I couldn't bring myself to play it for long though; whilst I knew the code leak wasn't the sole reason Valve delayed Half Life 2, playing it wasn't going to bring the game to me any faster. This time around I deleted my copy of the leaked game and waited patiently for its final release.
Most recently it came to my attention that the Crysis 2 source, which apparently includes the full game and a whole host of other goodies, made its way onto most popular BitTorrent sites. This time around, however, I haven't even bothered to go and download the game, even for curiosity's sake. There's less than a month to go until the official release, and really I'd rather wait that long to play it legitimately than dive back into that dark world I left behind so long ago. The temptation was definitely there, especially considering how much fun I had in the original Crysis, but a month isn't a long time to wait, especially with the other games on my current backlog.
If there's one common theme I've seen when these leaks come out, it's the passion the community has for these game development companies and their flagship titles. Sure it's misplaced, but the fever pitch reached with each of these leaks shows just how much people care about these games. Whilst a leak might damage the project initially, many of them go on to be quite successful, as both Half Life 2 and Doom 3 did. Crysis 2 should be no different, but I can still understand the heartache those developers must be going through; I don't know what I'd do if someone nicked off with the source code to Lobaco.
Will I ever download a leaked copy of a game before its release? I can't be sure, in all honesty. Although I tend to avoid the hype these days, I still get really excited when I hear about some titles (Deus Ex: Human Revolution, for example), and that could easily overwhelm my sensibility circuits, forcing me to download the game. I do make good on purchasing the games when they're released, however, and since I'm a bit of a collector's edition nut I believe I've paid my penance for delving into the darker side of the gaming world. I can completely understand if game developers don't see eye to eye with me on this issue, but I hope they recognize passion, however misplaced, when they see it.
In a very eerie coincidence with my post yesterday about being an early adopter of Sony's technology, it seems there's been a "leak" of the new kid on the block, the PSP Go. There are a couple of places talking about it, and here's what they have to say:
Look up there, folks. That’s the future of Sony’s hopes and dreams in the handheld gaming sector. With just hours to go before the company’s official E3 2009 press event, it looks like the pieces are all coming together. First a UMD-less game release, then a highly credible mole giving the PSP Go a name, and now — live action shots. The images here were sourced from an obviously slipped June 2009 Qore video, and aside from giving us a look at the slider-based system (which, let’s be honest, looks a ton like the questionably successful mylo), we’re also told that it’ll tout 16GB of internal memory, built-in Bluetooth and an undisclosed memory slot. If all goes well, it’ll ship this Fall for a price to be determined, and it’s actually not slated to replace the PSP-3000, as both of ‘em will attempt to live on store shelves harmoniously… at least for awhile. Oh, and don’t worry — we’ll be on hand in LA to bring you all the impressions we can muster early next week.
[Via PlayStation Forums, thanks Matt and A1]
Update: Video is now after the break! Thanks adizzy615!
Update 3: A few more official specifications are flowing from the full Qore video (pardon the sync issues). Here’s the dirt:
- 3.8-inch display (resolution is undisclosed)
- 43 percent lighter than the PSP-3000
- 16GB of Flash storage
- Bluetooth built-in; supports handset tethering and BT headsets
- No UMD drive
- Memory Stick Micro slot
- New Gran Turismo, Little Big Planet and new Metal Gear Solid (!) on the way
- Full PlayStation Network support (movie and TV rentals / purchases)
- Integration with PlayStation 3 (works the same as the PSP-3000 does)
- Sony views each of its products as “10-year lifecycle products,” so the PSP “needs to live on.”
Probably the most interesting development is Sony's dropping of the UMD format from the new device in favour of large internal storage. Now whilst I'm a dribbling, moronic fanboy when it comes to all things Sony, I can't say I was too impressed when they decided to release yet another media format for their handheld console. I mean, sure, I can understand the memory cards (at least I can use them in other Sony stuff), but the UMD was just another format that didn't need to exist, and of course one of the most popular kinds of homebrew app is the ISO loader. Granted, the majority of users out there use those to pirate games, but anyone can tell you that loading your games onto your memory stick improves battery life, reduces load times and saves you the hassle of carrying around those annoying discs. I'm glad Sony has wised up on that one.
With the removal of the drive and the addition of Bluetooth and a larger display, this upgraded PSP does make a pretty attractive purchase for someone like me. When the PSP Slim came out I didn't buy one because it was pretty debatable how much I would gain from such a device. This one, on the other hand, has things I can't get with my current PSP (now almost 4 years old), so it's pretty much guaranteed I'll get one. Hopefully they do some awesome integration with it so that I can use it in certain games, but that's up to the developers of course.
Now onto the leak itself. Colour me sceptical, but whilst everyone is saying this was a slip-up, I can't really see it that way. There's always a lot of buzz and hype around E3 each year (even though the show has seen a severe decline in recent years due to its "reorganisation"), and the easiest way to make sure you're talked about at the show is to release some details early so that the press will do the old confirm/deny/opinion piece. The amount of information we've been fed by this leak is pretty substantial, so there won't be any rush to report on Sony's next big handheld; what we'll see instead will be the hands-on reports and possibly some videos of it running. I can't help but feel this was an attempted leak gone a little too far, killing some of the buzz it would have seen at E3.
But then again I'm a cynical person when it comes to these things. Until I see the big corporations huffing and puffing and trying to blow down the houses of the people who leaked the photos/specs/videos, I don't believe it really was a leak. Sony confirmed all the details shortly after the leak as well, raising my eyebrow even further.
Still, it looks like a good evolution of the PSP handheld, and with mine ticking over the 4-year mark you can bet that I'll be looking to get one of these to have a fiddle with in the future.