XML Goo-i-ness Inside

Microsoft pre-released their XAML-in-the-browser technology, WPF/e, earlier this week. XAML inside.

XAML “smells” like the W3C’s Scalable Vector Graphics (SVG): DOM-inside-a-DOM, declarative animation, 2D graphics. XAML may not be SVG, but it certainly tips its hat to SVG.

Adobe today pre-released their XML-in-a-PDF technology, Mars, for Acrobat 8. Essentially, Mars as a technology is presently delivered as plugins for Adobe Reader 8 and Acrobat 8 Professional. You can save an existing ‘binary’ PDF out as a .mars file. These .mars files are like .jar or .war files: manifested, structured ZIP files. Looking inside, the description of a page is an SVG Tiny 1.2+ (as Adobe state, SVG/FSS) representation. The specification clearly documents that .mars takes the current concept of PDF, a document format, and extends this as XML.

These technologies do not directly intersect: an XML representation of SWF rather than PDF would be closer to XAML. Having cross-platform viewer support for Microsoft’s XPS would be closer to PDF.
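Since a .mars file is a structured ZIP, the standard tools for ZIP archives can peek inside one. A minimal sketch in Python; the member names here are illustrative, not Adobe’s actual internal layout:

```python
import io
import zipfile

# Build a tiny in-memory ZIP that mimics the *shape* of a .mars file
# (a manifest plus per-page SVG); the names are made up for the demo.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("META-INF/manifest.xml", "<manifest/>")
    z.writestr("pages/page1.svg", "<svg/>")

# A real .mars file could be opened the same way to list its contents.
with zipfile.ZipFile(buf) as mars:
    names = mars.namelist()
print(names)
```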

I was premature in saying SVG was deprecated.

Vista RC1 OK on Parallels 1896.2 (and Acrobat 8)

Watching the Parallels web site, I noted that the engineers had posted some more info, and a later build: 1896.2. I don’t know what the .2 means; probably that .1 wasn’t quite right.

Waiting for a better video driver (to use the 256Mb of video RAM in the MacBook Pro, without resorting to Boot Camp).

Anyway:

Vista RC1

Vista RC1 build 5600 installed and launched OK. Office 2003 installed perfectly on RC1; now I am hunting down an installer for Office 2007. Dontcha just love software?

Beta Technical Refresh 2 on Beta 2 on Release Candidate 1 on build 2 of Release Candidate 2 on MacOS 10.4.7. Schwarzwaelder Kirschtorte.

Speaking of cakes, Acrobat 8.0 is announced. I don’t have Acrobat 8 in any form, so I cannot add the cherries.

Watching the Language Wars

Today, at least in the US, it is Programmer’s Day.

Maybe it should be called “International Programming Language Peace Day”. The level of advocacy for various programming languages reaches rhetorical heights last seen during one of the not-so-successful 18th-century revolutions.

When not speaking to humans or reading the latest advocacy on their language of choice, programmers stitch together the wild thoughts of others to munge data into information.

Programmers are the people who use computer languages, in their various forms, to get computers to do cool things. From blinkenlights to cool online maps: there is a pyramid of programmers responsible for your computer experience. A programmer is behind the “ding” in the lift you used this morning, and the software that validated your ticket on the bus ride to work.

The beauty of computer languages is that they never seem to stagnate: like modern spoken languages, they evolve as the world changes. Except those that are abandonware.

Microsoft has recently released my current favourite programming language, Python, as a CLR/.net language: IronPython. This implements Python as a dynamic language on the CLR engine.
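A tiny, hedged illustration of what “dynamic” means here; this plain Python also runs unchanged on IronPython, where classes and attributes can be created and rebound at runtime:

```python
# Attributes can be bolted onto objects at runtime; no compile step,
# no declared fields: the hallmark of a dynamic language on any engine.
class Bag(object):
    pass

b = Bag()
b.greeting = "Hello from a dynamic language"  # added on the fly
print(b.greeting)
```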

C# is the language of implementation for the CLR, as Sun’s Java is for the JVM. Then there are A# (Ada), B#, D#, F# (OCaml), G# (Generative), J#, P# (Prolog), L#. More sharps than Beethoven.

The language wars have returned to an old field: dynamic languages. The grand-daddy of dynamic languages, LISP, has received some recent positive PR. One person, Paul Graham, is the poster millionaire for LISP. Lazarus of LISP.

This week, Sun Microsystems parried Microsoft’s IronPython by hiring the team behind JRuby. The aim is to implement the Ruby dynamic language on the Java Virtual Machine (JVM). Some months ago, this team got Ruby on Rails working on the JVM.

Whilst the big language guys battle it out, is Erlang the next Ruby, or is it just a viking proto-language with the best non-pun name? The Erlang community is starting to come out of their telephone exchanges.

No language is deemed to have arrived in the 21st century until there is a web framework written around it. C# has ASP.NET, Python has Django, Ruby has Rails, Erlang has Jaws, Scheme has Magic… and so it goes on.
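To show how low the bar for “a web framework around it” can be, here is a minimal sketch of a Python web application using only the stdlib WSGI plumbing; the request is faked for demonstration rather than served over a socket:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # the entire "framework": a callable taking a request environment
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a Python web app"]

# Fake a request rather than starting a real server.
environ = {}
setup_testing_defaults(environ)
seen = []
def start_response(status, headers):
    seen.append(status)

body = b"".join(app(environ, start_response))
print(seen[0], body)
```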

This broken thing called JavaScript has been reborn with AJAX, and is receiving daily blood transfusions of new features.

All of these languages just remind me of my personal all-time favourite language, the love of my life: HyperCard’s HyperTalk. As HyperCard is no longer sold, and “Classic MacOS” is a battle to get going on my MacBook Pro, it is sadly a language as useful as Cornish.

So, for a short period of time, it is back to one of HyperTalk’s children: AppleScript. Basketweaving for the mind.

Parallels 1884 Vista Quick Notes (and update)

Download the 21Mb update to Parallels (to build 1884)

Boot Windows XP to ensure all is OK before I install Vista. Windows XP “seems” to boot a little faster. Unable to quantify exactly how much.

Backup existing 15Gb Windows XP .hdd, just in case. Create a new 15Gb image to install Vista into.

Parallels settings:

Parallels settings

Install into the fresh 15Gb image, with 1024Mb of RAM allocated to the image. Vista is marked as “(experimental)” as an OS choice. Installing onto a MacBook Pro with 2Gb of RAM and MacOS X 10.4.7.

  • Beta 2 Build 5384 DVD (thanks, Frank Arrigo at Microsoft Australia)
  • Started install at 11:05am
  • Vista install auto-restarted at 11:35
  • Vista install auto-restarted at 11:43am
  • Questions (location, time, username) at 11:46am
  • Vista install auto-restarted at 11:47am
  • Into Vista Beta 2 at 11:50am
  • Install Parallels Tools from the Parallels VM menu. Note that these don’t seem to be signed drivers, so ignore all the warnings and install away
  • Manual Vista Restart
  • On restart, if the “Welcome Center” doesn’t appear, choose it from the Start menu. Click on Add Hardware.
  • Vista found the network card, and automatically configured the network. Note that Vista also finds a “PCI Bridge Device”, which I asked Vista to ignore
  • Restart; Vista found the network card, and automatically configured the network. Note that setting the Parallels VM’s Network Adaptor to “Bridged” worked OK

In short, it works. Note that I haven’t stress tested this; and the Parallels guys say it’s experimental. A beta OS on experimental hypervisor virtualization. Your mileage may actually turn into inchage quickly.

vista login

Vista Desktop first questions

RC1 Note from 8:20pm

You cannot install Vista RC1 on Parallels. Bugger. ISO, burnt DVD, or upgrade from Beta 2 to RC1: none of these paths work.

***STOP: 0x000000A5 (0x0001000B, 0x50434146, etc)

“The ACPI Bios in this system is not fully compliant to the specification. Please read the Readme.txt for possible workarounds, or contact your system vendor for an updated bios.”

FreeDOS and Parallels

File this in the “why?” basket.

freedos

FreeDOS works with Parallels. So now, for the full 1987-1992 retro-experience, the MacBook Pro can learn about HIMEM.SYS, FAT32 and other evils that Windows has shielded us from.

How to:

  1. Download FreeDOS ISO image
  2. With Parallels, create a new VM (virtual machine) with a hard drive image
  3. Set the CD as the VM’s boot device, pointing it at the downloaded ISO
  4. Start the VM
  5. Follow the onscreen install instructions: note, be careful erasing your hard disk image!

The VM settings screen will look something like this:

FreeDOSVM

Gartner Agrees with nickhodge.com

In “Windows Vista the last of its kind: Windows will go virtual”, Gartner agrees with my assessment that the future of Windows is componentised, virtualized and smaller.

Gartner expects a significant update to Vista in late 2008 or 2009 that will add virtualisation (in the form of a component called a hypervisor) and a service partition.

You read it here first, 4 days ago.

Virtually Emulating First Loves

In an effort to re-ignite my first love whilst on my leave of absence – I’ve been looking for a good TRS-80 emulator to rekindle the flames of technical desire. Also over the last 4 weeks I’ve also had a small “side project” watching the goings on in the desktop virtualization space, especially on the Mac. Parallels has been an excellent investment to get Windows XP running on the MacBook Pro; just waiting for the ACPI/Direct3D (or VMWare for the Mac) version so I can run a build of Windows Vista.

Admission #1: the first computer my dad purchased for me was a TRS-80 Model I. Not the prettiest, nor the most powerful of machines: 1.77Mhz with 16 Kilobytes (I even accidentally put Mb!) of RAM. Welcome to 1981. That’s right, 1981. 25 years, a quarter of a century, ago.

The best emulator for the TRS-80 is written by Matthew Reed, found thanks to Ira Goldklang’s TRS-80 web site. So, I have TRS32 running inside Windows XP in Parallels on MacOS X. Shells within Shells.

Quest for the Key of Night Shade

Admission #2: the TRS-80 we owned stored data onto a cassette, not a floppy disk. Way back when, I was one of those computer-store kids. Thanks to the sales guys at Tandy Electronics/Radio Shack, we’d spend all day sitting at the computers, typing in programs and occasionally demonstrating to prospective buyers. As floppy disks were expensive, we didn’t get access to storage, so TRSDOS was not an environment I was ever exposed to. Getting the emulator working involved remembering how to get BASIC working, and learning yet another OS.

Admission #3: I’ve watched zero minutes of Lord of the Rings. Not even on DVD. Ever since the school librarian suggested I borrow The Hobbit, and I attempted to read a single page and quickly returned the mush, I’ve actively avoided the fantasy genre. World of Warcraft drives me nuts. Sorry, Neil and Mark!

Before this dispassion arose, I did get into one fantasy-style game on the TRS-80: “Quest for the Key of Nightshade”. It is strange how you remember names such as these for many years. Last week, on Ira’s website, I found a version of the BASIC program that I had originally typed in line-by-line from a computer magazine and saved to cassette. From memory, it was written by a Canadian programmer, won “TRS-80 game of the year 1981” in some US magazine, and was reprinted in 1982 by Australian Personal Computer.

The screen dump above is from this game. Ahh, the fond memories of our first loves.

Being the Forest, Forgetting the Trees

Microsoft is on the cusp of shipping a whole forest of new products: Vista, .Net 3.0, Office 2007 and more *.live.com stuff than you can poke a branch (or stick) at. All of which presents Microsoft with some tall challenges. How does a single tree get noticed? How does the world find the saplings that are going to be the next Sequoiadendron giganteum? Does the forest work together as a cohesive eco-system?

Today, thanks to Microsoft Australia’s Frank Arrigo, I attended the Blogger’s Brunch. Great of Microsoft to reach out to a section of the local technology blogging community. None of the attendees (except Angus Kidman and Nic) are famous in the blogosphere, but on the internets, no-one knows you are an Australian.

Whilst I have been a Microsoft customer since 1984 (Microsoft Basic 1.0 on a Macintosh 128K; the box is in storage somewhere), I am a relative noob to “marketectural” Microsoft. The speak is strangely familiar to my ears.

The following are some random thoughts and un-expressed questions from this morning’s session:

  • To the Microsoft PR people: sorry I paralleled the Microsoft-Groove/Ray Ozzie history with Apple-NeXT/Steve Jobs. To Frank Arrigo: sorry I stated that the *.live.com people are having fun being compatible with all the versions of Internet Explorer rather than implementing Firefox support. Both of these were intended as jokes, not memes.
  • Today’s Australian Financial Review’s IT section has quotes from various large Australian financial organisations stating that they are taking a wait-and-see approach to Windows Vista. Some are only now installing Windows XP. These organisations state they will install Vista in 2-3 years. I find this quite interesting as it has taken them 4-5 years to install Windows XP. Personally, I am concerned if a large financial organisation is not running a recent, up to date, tested and secure OS on all their desktop computers. I’d love to know what features in upcoming products are direct feedback from Australian customers. This would show that the software development process is a two-way street.
  • Sharepoint should evolve into a *.live.com server for the Enterprise. If Vista has all the hooks, and the connected/disconnected world and new applications are going to be mashed (lashed?) together with live stuff, this seems like a logical move. However, large organizations will be reluctant to put all their data into the world’s cloud for all to stumble upon. I am no expert on Sharepoint and all the positioning stuff, but it seems there might be a little “tension” (not a bad thing, mind you) between these two environments. *.live.com is garnering the mindshare as it is new-ish; many of the APIs and licensing models are to be determined. Come to think of it, these are probably the two reasons why they are still separate: revenue and developer penetration.
  • After hearing the IT professionals fawn over the coolness of Vista infrastructure deployment, I left the session (both mentally and physically) asking “what are Microsoft’s customers going to do with all these fine trees?” Customers doing meaningful stuff with Microsoft’s software, so that they can impress their customers, is where it is at. Marketing people might call it unlocking the value of the platform.
  • Virtualization on the desktop has been one of my “things” for a while, so it’s interesting to hear that VirtualPC is to be included in the Enterprise version of Vista. Whilst listening to the intricacies of Vista vs XP deployment, my mind was racing, thinking about the future of operating systems. So here goes: why is the Enterprise desktop so fat? Why not have a Singularity-based OS with the .Net 3.0 Framework as the API? Win32 and other legacy apps could be virtualized to the desktop. As the world and work become more connected, the smart client at the edge of the network will have a different face.

In summary, I grokked that Microsoft groks (sorry, Heinlein) the world as it exists today. Ensuring that no trees are felled in the rush to market is going to be an interesting challenge.

One Mac Head, Two Minds

An excellent article from the New York Times: Weighing a Switch to a Mac. Interesting, as it goes through the two options: Boot Camp or Parallels.

You don’t need to leave your Windows-mind behind when switching. Now that I am disconnected from the Adobe-mind, I rarely use Windows applications. But then again, I’ve not really done much in the last two weeks apart from fill this blog up with stuff!

Our Virtual Future: There are Cycles to Burn

Just over the digital horizon, your Apple MacBook will boot multiple MacOS X 10.x, Windows Vista and Linux/Ubuntu operating systems at launch. You don’t see all of their friendly faces, but they are there, ready to go. Get-info on Firefox, and you inform the primary OS which of these operating system environments you would like the application to launch into. Need to run Outlook 2007 for your large organisation? No stress: it’s there, behind the scenes, as you run Photoshop on the MacOS side. If your Windows instance crashes as you are testing a new application, it is shut down and relaunched automatically. Far fetched? With virtualization and multiple-core CPUs, no. On the desktop, usable virtualization is relatively new; what can we do with modern processors and software? There are cycles to burn.

Virtualization is nothing new in the IT world. Mainframes, the Crocodilian survivors of last century, have long used virtualization as a method of performance management and isolation. If you log in as a user on a mainframe, your session is a virtualized instance of the whole operating system. This isolates you from other users, and protects the whole system from strange things you may do: like runaway queries. SELECT * FROM INVOICES;
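That isolation idea can be sketched in miniature with SQLite, whose progress handler lets a host abort a query that exceeds a work budget; the budget numbers below are arbitrary, chosen just to trigger the abort:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER)")
conn.executemany("INSERT INTO invoices VALUES (?)",
                 [(i,) for i in range(10000)])

calls = {"n": 0}
def budget():
    # returning non-zero tells SQLite to abort the running statement
    calls["n"] += 1
    return 1 if calls["n"] > 5 else 0

conn.set_progress_handler(budget, 100)  # check every 100 VM opcodes
try:
    # a runaway cross join: 10,000 x 10,000 rows
    conn.execute("SELECT a.id FROM invoices a, invoices b").fetchall()
    aborted = False
except sqlite3.OperationalError:
    aborted = True
print("aborted:", aborted)
```

The query is interrupted long before it grinds through its hundred million rows: the rest of the “system” carries on.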

Micro-processors since the Intel 4004 have survived with a single Central Processing Unit (CPU) that stepped through commands in strict military time, as sounded by the clock. We started with Hertz, then x1000 to Megahertz, and now x1000 again to Gigahertz. A big leap has been the addition of multiple “cores”, or separate CPUs, to the Intel processor line, along with the VT-x virtualization extensions. With multi-core processors being the current “thing”, and with Intel talking about their processors having up to 32 cores in 2010, performance is going horizontal (more CPU cores) as well as vertical (GHz clock speeds). The drummer in that military band must be getting sore arms beating that fast!
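A back-of-the-envelope way to see the horizontal/vertical trade-off: with perfectly independent tasks, wall-clock time falls roughly as ceil(N / cores) x task-time. The numbers below are illustrative, using Intel’s talked-about 32 cores:

```python
import math

def wall_time(n_tasks, cores, task_secs):
    # perfectly parallel, identical tasks: an idealised upper bound
    return math.ceil(n_tasks / cores) * task_secs

print(wall_time(32, 1, 1.0))   # one core: 32 seconds of wall time
print(wall_time(32, 32, 1.0))  # 32 cores: 1 second of wall time
```

Real workloads rarely split this cleanly, which is why few applications scale into multiple cores; virtualization, running whole independent operating systems side by side, is one that does.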

PowerPC and other RISC-like processors started to split their processing commands to internal co-processors. This is like delegating the “difficult” jobs to the specialized underlings. For instance, Maths to Floating Point Units (FPUs). To gain performance during the latter days of the PowerPC, Apple started adding multiple processors to split the workload. Smart software could take advantage of these to speed up heavy processing tasks.

Multiple Cores in the same processor is bringing this same philosophy to everyone – including low-end MacBook laptops. Few applications scale well into multi-cores: one that does is Virtualization.

How does this relate to virtualization? Now that we have multiple cores, one potential use is to run separate operating systems on them, with each operating system running at full speed. Veterans in the Mac world will remember the first versions of non-Microsoft VirtualPC. This software emulated (and dynamically recompiled!) Intel processor commands on PowerPC, like a human language translator. Translating takes time, and therefore the operating system just didn’t feel snappy. Now that we have Intel multiple cores, the OS world is a user’s oyster, as the translation is no longer required.

As virtualization moves into the hardware, the software that has been unique can be commoditised. This has drawn VMWare and Microsoft into a war for the “Virtual Server Platform”, with the first battle on the desktop to grab hearts-and-minds. The dollar returns for these organizations lie in “ownership” of the server platform.

Microsoft has released VirtualPC 2004 as a free product, with VirtualPC 2007 also to follow for free. This is a parry to the VMWare thrust of VMWare Player. VirtualPC comes from a product started by Connectix. An announcement was made at a recent MacWorld that Virtual PC for Mac would support Intel-based Macs in the future. The Channel9 team at Microsoft have an interview with the engineers of VirtualPC; this provides an excellent backgrounder on the history and technology of virtualization.

EMC, and specifically their VMWare division, offer VMWare Server and VMWare Player for free. Conveniently, VMWare have links to pre-created VMWare images, or “appliances” as they name them.

Server Appliances: one of the VMWare images available is a Firefox Browser appliance. A self-contained operating system with user interface (Ubuntu) and Firefox: pre-installed, ready to go. We are all waiting for VMWare for the Intel-based Macs.

The recent scramble over the Xen environment for virtualization between Microsoft and VMWare, and recent comments by Novell, are an indicator of the importance of server virtualization. Many large server hardware companies must be concerned about the impact on their business as server hardware is consolidated.

Servers, large systems, and IT shops with many physical servers and the need to support legacy server applications benefit from virtualization today. How about desktop environments?

One of the first applications I installed on my MacBook Pro was Parallels Desktop for the Mac. There are a few Windows-only applications I need to use, and within 30 minutes I was sold (and yes, I purchased a copy). I use Windows XP daily; Parallels has also been useful in seeing what this commotion about Ubuntu has been about, and in doing a test install of Debian prior to a production install as our home server.

10 Ways Virtualization Will Change Our Digital Lives:

  1. Critical Application isolation: in our daily computer-lives, there are applications that are more mission critical than others. Imagine you have a soft-phone running via VoIP. You really want that application running, rain, hail or crash. With virtualization, you could put that application in an environment that is isolated from your World-of-Warcraft environment. In fact, you could have separate “play” and “work” environments to separate your digital life.
  2. Holistic Backups: backup your VM, and therefore your whole environment. I remember the first time I lost a hard drive (an amazing 100Mb external SCSI drive on a Mac II) and lost all the data. Bourbon and Coke soothed the loss; now I have a laptop with 100Gb. Learning from the original loss of data, there are a myriad of backups lying around: burnt CDs, DVDs, a Debian server with 450Gb, some data inside GMail. However, the investment in the setup of my machine is worth 4-5 working days to become “online” again if I lost the hard drive. An image of the whole VM backs up that setup investment, let alone the data.
  3. Pre-setup Server Platforms: consider a project like Ruby on Rails. To install this beastie, you have to install Ruby, Apache, MySQL (or Postgres), the Ruby Gem remote installation/dependency system, Rails, plus a few Ruby hooks to connect all this together. For the faint of heart, or the non-OS tweaker, it’s a major and potentially impossible chore. Deploying this in a development/test/production cycle increases the complexity by an order of magnitude. Servers, and server software, are just too darn complex for the average developer.

    Instead, if there was a lightweight OS with these pre-configured, installed, security checked – a developer could simply download the VirtualPC/VMWare image and start.

    It would be cool if organizations that sold/developed server tools provided a “pre-installed” image rather than a myriad of inter-linked installers and required dependencies.

  4. Lightweight Server operating systems: a base “framework” on which “virtualized” systems can be created and easily deployed. Guess what? There are already such environments for Linux, and potentially for Windows.

    Rather than building a large, complex OS with multiple software sub-installs, commercial server applications should move to a VMWare/VirtualPC image for double-click launch-and-go.

  5. Deep debugging: imagine an instrumented, or as Microsoft call it, an “enlightened” OS. Run the VM, record all the commands until a crash. Rewind, replay.
  6. Virtualizable OS for Test/Development; or Quick OS Undo: launch a VM and test an application and keep it separate from your working operating system. If something “breaks”, throw away the VM image and restart. Keeping clean images of various operating systems in “cold storage” permits quick resetting back to a baseline.

    In a recent Ruby on Rails podcast interview: “It’s about breadth, not depth”. As processors are starting to go multiple-core, applications need to scale horizontally, not expect more processor speed to magically appear. This has an impact on dynamic/scripted languages being deployed as web applications. SOA, too, is about these applications connecting to each other between different servers.

  7. Hosted Servers from Virtualized Images: grab a “copy” of a virtual server from your new hosting provider; locally test/install, and then remotely deploy and start. After writing this, I noticed that TextDrive is offering Solaris Container style serving.
  8. Clone + Clone back: take an image of your current VM OS configuration. Make some change (install an application or OS upgrade), and binary re-merge the now-modified clone back onto the original. This is probably more difficult to do than to think through, but a well-architected OS could permit smaller, diff-style changes.
  9. Why re-invent the wheel? Large applications have large problems: saving files, managing memory. With an operating system as the foundation, providing IO/MM and low-level access to the hardware, could a large application run in its own operating system? Separating the IO/MM out of the large framework into a well-tested OS results in re-factored frameworks. Higher-level applications really don’t need to know they are in a virtual environment.
  10. Network Virtual desktops: a twist on the Citrix environment, but the CPU in use is on the desktop, with the file image on the server. VMWare is part of a group working on desktop VDI. What we need is a small, easily virtualizable (and pre-installed) image of MacOS X and Windows XP/Vista.
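The “clone + clone back” idea in point 8 can be sketched as a block-level diff: compare two disk images in fixed-size blocks, keep only the blocks that changed, and re-merge them later. The block size and image data below are toy values for illustration:

```python
BLOCK = 4096

def block_diff(old, new, size=BLOCK):
    # record only the blocks that differ between the two images
    changed = {}
    for off in range(0, len(new), size):
        if new[off:off + size] != old[off:off + size]:
            changed[off] = bytes(new[off:off + size])
    return changed

base = bytes(BLOCK * 4)           # pretend 4-block "original" image
clone = bytearray(base)
clone[BLOCK:BLOCK + 4] = b"edit"  # modify one block of the clone

diff = block_diff(base, clone)

# re-merge the modified blocks back onto the original
patched = bytearray(base)
for off, blk in diff.items():
    patched[off:off + len(blk)] = blk
print(sorted(diff), patched == clone)
```

Only one 4Kb block needs to be stored and merged, rather than the whole image: the diff-style change the point imagines.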

Virtualization, as the processors go madly horizontal with multiple CPU cores, is going to be a large part of our future: not only on the server, but also on the desktop.

Download Parallels (if you are on an Intel Mac) or VMWare Player / Microsoft VirtualPC, give it a whirl, and experience the beginning of our virtual future. Burn those Cycles!

Bedtime Reading