SEO for WordPress

I’ve been working a little on some SEO at work and learning quite a bit from our resident expert.  It’s kind of funny because I’ve written SEO pieces for a few apps (metadata, deep linking, canonical pieces, under webs, etc.) and still, as a developer, it’s a whole practice, like doing CMS/EDM, or service buses, or GWT, or JSF, or Spring Batch, or Drools/ILOG, etc.  Just because you know Java, you might not know a practice.

Anyway, I read several compelling arguments about WordPress links.  There are probably a few plugins, but you can just use the canned permalinks feature in WordPress to make the links “better.”  The arguments from all sides boil down to a balance of these possible requirements:

  • Performance:  Apparently site links that are more textual/semantic, like <site domain>/category/article name, perform poorly.
  • Usability:  If you have the canned WordPress link structure with the post id, <site domain>/?p=123, it is definitely not user-recognizable.  In this case the semantic approach helps a lot.
  • Traffic strategy:  If you include a date path, like a lot of sites do, people might not want to visit your site if they see the article is older.  For instance, a common link structure like <site domain>/year/month/date/article name might be what you want for news, but not for more timeless content.  This might be moot if a search engine shows the date of your article in the results anyway, though I’ve heard that if the date is already in the link, the engine might not.
  • Search engine rules:  I’ve read that some services, like some Google ones, want a numeric id in the URL.  (Understanding search engine rules is like divining for water half the time; the SEO people always stare up at the ceiling and say “it could be like that, yes, perhaps.”)
  • Web analytics:  If you have unrecognizable URLs and business people are sharing your web analytics reports, then you are definitely going to want something semantic in the link.

I’ve chosen <site domain>/post id/article name for my sites, which is a standard strategy.  I went this route for analytics and search engine optimization.  The most common link format seems to be <site domain>/year/month/date/article name, but I don’t want any dates in my URLs.
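For reference, the post-id-plus-name structure maps onto WordPress’s permalink tags.  This is just a sketch; the WP-CLI commands assume the site is managed with WP-CLI, which is not something I mentioned above:

```shell
# Settings -> Permalinks -> Custom Structure equivalent of
# <site domain>/post id/article name:
#   /%post_id%/%postname%/
#
# If the site is managed with WP-CLI (an assumption), the same
# structure can be set from the command line:
wp rewrite structure '/%post_id%/%postname%/'
wp rewrite flush   # regenerate the rewrite rules
```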

The Base Nature of a Technology (But Not Necessarily)

A long time ago I was in one of those corporate rah-rah training programs that dealt with teamwork.  One of the exercises was putting together a difficult puzzle as an individual while team members, knowing the solution, watched but could not help you at all.  One particular person we got to watch absolutely could not rotate a block to solve a puzzle, and the amount of frustration we felt was incredible.  But even more striking was watching how the poor puzzle solver was locked, locked into a solution they tried over and over again, and kept failing over and over again.

I see this often in the real world.  For instance, many developers come onto a new project and immediately criticize the frameworks or implementations without ever having mastered any of them.  Just recently I saw a developer who had not even gotten a running instance up insult the MVC framework of an app, even though he had no experience with it and had only just started to read about it.  Many people criticize technologies like Flex or relational databases having never written line 1 of code in them.  The same certainly applies to management.

Recently my team was told to use JIRA as a PM tool — not just as a bug tool, but also as a PM tool.  I thought, what the heck, why not.  I have extensive experience with JIRA as a bug tool; with Rally, XPlanner, and to lesser extents Mingle, VersionOne, and Pivotal Tracker as PM tools; and of course MS Project (the goal of all these, hahahaha 🙂 ).  In a later article I will comment on JIRA used in this manner, or maybe on GreenHopper if we get it — and believe me, it needs commenting.  But for now we are using JIRA as a bug tool.

What amazed me was that when I sat down with the product owner and the PM, they immediately wanted to change the tool and start adding fields and workflows.  Neither had used JIRA before; neither had received training.  While it’s true they had their organizational knowledge, and that cannot be overstated, it was very interesting to watch people project their expectations onto a tool without even knowing what it was.

The only thing to do was to actually look at the tool, hear experiences, and gather requirements for our needs.

For instance, the PM wanted to use a date field as a “hand off” field for task people — but the workflow already handled this.  The product owner had an interesting view on the status values for his purposes, without even having looked at them!  I explained: OK, maybe we could do that, but it might break its usefulness for the task people.  We had to figure out the scope of each activity as it pertained to whoever would be using it.

We started to review each field and what it could be used for, and how it might fit our project.  Then we looked at the flow and statuses.   I explained to them how I had used this tool (quite often) in the past and why those flows or statuses were there, and if we changed them what the implications might be.

This has happened quite a bit in my career — management teams and builder teams have different flow and status needs.   They cannot be merged into one, they can only be layered.   This has to be called out or someone will be very hindered or have to do double entry of status somewhere, which defeats the purpose of using tools — saving time.

In the end we agreed to try it out in its vanilla state and incrementally recommend change.  The idea of a process that changes over time was kind of new to them, but they saw the value in it right away.  We’ll find the balance.  Also, we are hindered because the entire company is using JIRA, so adding a field might not be such a good idea if it only pertains to us (since everyone else will get that field too).  The super-high-level people peeking in don’t want that.  SCOPE!!!!

This hasn’t always gone well at places I’ve worked.  A year ago I noticed a huge breakage in our flow at a place I was consulting — basically we needed a build number in our stories so we could track check-ins for the QA and BAs.  There was a whole huge manual workflow around this that wasted time and cascaded even up to what was deployed at the release level.  A simple field would have solved this.  But the director saw no use in it for his own needs from the tool, so it was never added.  Que sera, sera.

The lessons:

  • Learn the base tool before making use decisions.
  • Training sure would help.
  • Know your own flow.
  • Be ready to change.
  • Make sure you gather requirements and use cases for all concerned parties.
  • Don’t use a tool if someone will find it burdensome.
  • Keep a flexible mind, because your method may not solve that puzzle.

Which is more Open Minded: Flexibility, or Non-Flexibility?

I’m kinda getting tired of the Mac crowd.

BTW — I am typing this on a Mac.  I use a Mac because that’s what I have at home.  It’s OK I guess.  For work I have a Windows notebook.  It’s OK I guess.

Mac people definitely look down their snouts at anyone else, even Linux users.  But what kills me about it is that they invoke strange religious-like ideas about “the computing experience” and “truly being freed from (Microsoft, usually).”  In the same breath they will say Windows machines are too flexible and require too much customization.  Then they’ll say they are freed up to do what they want on their Mac, yet support the idea that you can’t open up a new Mac without a special screwdriver to change out simple hardware pieces.

It’s all very strange and frustrating to me.  It reminds me very much of the Harley-Davidson crowd who ride the motorcycle for its looks, claiming it’s the best; but it’s the cost that really sets them off and what they are into.  The social status.  And, BTW, I ride an Ultra.  It’s OK I guess.

So given that Macs are less flexible, how does that promote more open-minded thinking?  Right, it doesn’t.  Less flexibility cannot mean more open-minded.  Often Windows users are called “sheep” — but it’s Mac that tries to ban things from its “experience,” like Flash.

Recently I even had words with an Applyte about iPods and iPads.  I stated that iPods weren’t the first, just like iPads weren’t the first.  That Apple pioneered marketing, and in some cases usability.  But there were Archos and Creative mp3 players long before Apple’s.  And Windows tablets long before the iPad.  In fact, there were a lot of things way before the iPhone.  Treos and BlackBerrys and Palms, oh my!

So I guess if I am going to be forced to pick sides, I’ll say I am a Windows/Unix guy and that a Mac is just an iteration of a Unix machine, and that’s all.  The UI is a bit clunky, kind of like GNOME.  They are all OK, I guess.  As a developer I keep my skepticism of everything.

But I WILL admit that Apple sure sells a great snake oil.  It’s very compelling.  They are good at figuring out the 80% user experience; and if not for the cost of their goods, many would rightly migrate to them for now.

However, I wish the attitude would go away.  I was working with Chef, trying to do some Amazon EC2 config stuff — and there was not much on using a Windows machine (the one my company gave me) to install and use Chef for Windows.  The wiki went out of its way, in fact, NOT to mention Windows.  WHY?  Is it funny to lock 90% of OS users out of your product?  Or is it “good business”?

I hunted around for the numbers, BTW — here they are, from Netmarketshare.com:

                     WIN      MAC     iOS     LINUX   JAVA ME  OTHER
    April, 2010      91.46%   5.32%   0.69%   1.05%   0.79%    0.70%
    February, 2011   89.69%   5.19%   1.81%   0.92%   1.04%    1.35%

Interesting — the Mac share dropped from last year!!!!  I expect Windows to drop, because maintaining over 90% has got to be near impossible with Android and Ubuntu etc., but who would have thought?

Guess I’ll just take my medicine when I am around the Mac purists.  After all, I never argue with the Watchtower peddlers at my door.  It’s just better to remain flexible.

The New World of a Project Manager

I’ve been re-reading the classic Mythical Man-Month and working on a new team with a PM inexperienced in my field, and I have come up with some compelling observations about how things have changed since the first edition of MMM was written in 1975 — some 36 years ago.

The most obvious is that back then PMs were domain experts.  They had been programmers and were explicitly involved in more of the technical decisions.  If you look at the structure suggested in MMM, there’s a definite hierarchy that puts the PM, most of the time, at the very top and the technical director, most of the time, below.  And back then most PMs, having been technical, could do that.  They could understand, share, or drive the technical vision.

Things have changed a bit.  Just since the mid-1990s, the certification for a PM has gone from a simple “come in and take a class” to needing 3-5 years of proven prior experience just to get the certification, depending on the cert.  Technical schools offer project-management career tracks.  What this all points to is PMship as a profession, not a career level in a technical path (although it could be).

PMship as a profession means that you never have to have been technical to do it.  It’s a separation of concerns.  On large projects especially, it’s a full-time job to develop the PM artifacts: mission statements, SLAs, schedules, communications plans, etc.  It takes a lot of time.  Professional PMs are better at PM-ing, although at a cost of technical savvy.  The industry has decided this trade-off is OK, or there wouldn’t be these types of PMs.

I have worked with a lot of these new types of PMs and found them effective once they realize that they are accountable for their own work, and that driving the schedule and communication gives them the management tools they need to drive a project.  There is dissonance created too.  Non-technical PMs sometimes use their organizational weight to dictate design or prevent necessary technical activities like refactoring or proper testing.  That puts the technical team’s credibility and accountability in jeopardy.  Nothing drives me more crazy than a PM who has never been a developer (something I see a lot of now) thinking they can dictate tech actions from derivative experience.  As a tech leader I will advise them of estimates and outcomes; but just as they get to consider (only) my advice on PM issues, so too I have to consider (only) their input on technical issues.

The big thing here, too, is that my sniffer is on for “heads you win, tails you lose” situations.  This is where people are put into situations where they have no control or input (like design decisions or scheduling) yet are held accountable for the outcome of bad decisions up the chain.  A good project structure can help ensure this never happens — thus avoiding project failure, or unhappy and migrating team members.

How can we work through this?  We have to all learn that things these days are done in parallel.  We all don’t have the same skills, and we have to trust each other.  And leadership isn’t appointed, it’s earned.  Management IS appointed, though, and management is as much about service to the team as anything else.

What I do on a new team is drive the accountability artifacts until they’re where I want them.  For instance, if there is no schedule, then I make estimates and give them to the team.  The PM can then take that as a starting point.  Any of the artifacts — we can all do that; just turn them over.  If I am missing a critical state chart and someone cranks one out, then by all means let’s use it.  Let’s put it in the wiki or document storage; we are getting paid to make software, not compete with one another.  Once these tools (wikis, bug trackers, documents) are in place, it’s amazing how the roles fall into place as well.

Developing/Installing apps on Android Atrix

I just bought an Atrix from my carrier, AT&T.  It’s a groundbreaking device: dual-core processor, 1 GB of RAM, an HDMI port.  The battery life is stupendous.

Anyway, the first day I made a simple app from a sample I found.  You can use either Eclipse or NetBeans to make an Android app.

Here’s the developer setup for making an Android app for Android 2.2 (Froyo I think):

  • Unzip the Android developer toolkit on your machine.  Start up the “android” executable in /tools.  From there you can install the Android SDK and Android SDK Tools needed for your version of the OS.  For me that was the android-8 SDK and revision 10 of the tools.  Install the samples for your version too.
  • Create a virtual device based on that from the Android GUI.  Nothing special to advise there.
  • Install an Android plugin into your IDE.  Both Eclipse and NetBeans have them.  Sorry, no links, but they are easy enough to find.
  • Create an app, run in the emulator from your IDE via the plugin.  Very cool.
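The setup steps above roughly correspond to these commands from the SDK-era “android” tool.  This is a sketch; the AVD name is made up, and option spellings should be checked against your tools revision:

```shell
# From the unzipped Android developer toolkit:
cd android-sdk/tools

# Launch the GUI to install the android-8 SDK, tools revision 10, and samples:
./android

# Creating the virtual device can also be done from the command line
# ("froyo-avd" is a hypothetical name; android-8 is the Froyo/2.2 target):
./android create avd --name froyo-avd --target android-8
```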

I made a small RssReader app (thanks to automateddeveloper.blogspot.com) and loaded it here: https://bitbucket.org/ivystreet/project/src.  This is an Eclipse project, but I will soon be making a NetBeans project for another company I know, as a marketing treat for them; hopefully it’s a bit better.

At first I had a little problem loading the app (bundled into an .apk) onto my Atrix using adb.  “adb” is the Android Debug Bridge, a command-line utility that manages apps over a USB cable.  But actually, the solution to load apps on an Atrix via USB was simple:

  1. Go to Settings–>Applications–>Development and turn on “USB Debugging.”
  2. Plug in your Atrix via USB cable.
  3. Go to Top Menu Pulldown–>USB Connection and select “USB Mass Storage.”
  4. In a terminal type <your path to>/adb devices.  You will see your Atrix listed.
  5. In the terminal type <your path to>/adb install (path/name of app).apk.

That’s it, you have installed your application.
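Steps 4 and 5 look like this in the terminal (assuming adb is in the SDK tools directory, and the .apk name here is a placeholder):

```shell
# List attached devices; the Atrix should show up with a serial number:
./adb devices

# Install the app bundle ("RssReader.apk" is a placeholder name):
./adb install RssReader.apk
```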

Some sites want to call this “side-loading” or “hacking.”  I think the word “hacking” is something the newer dynamic-language developers use in a different context than a seasoned Java developer like myself, and those terms don’t imply doing something you shouldn’t be doing anymore, so don’t let them scare you.  Using adb is a perfectly fine method to load apps, and absolutely necessary for an Android developer.

When it’s in DEV, it’s in DEV, dammit

Last week I was faced with a dilemma I run into on projects that are just gearing up their process.  The dilemma may sound simple, but it is this: when is it OK to log defects?

Now some of you may think this is a stupid question and answer “ALWAYS, OF COURSE!”  But I heartily disagree.  Because BEFORE we start logging defects, we have to ask the question: what are we trying to accomplish with this phase of the software?  If the software is still in DEV — that is, it’s not finished yet — then what does a logged defect really mean?  And whom can the logging damage?

OK, let’s outline what I see as the problem once more: logging bugs on work that is in development progress is a hindrance, because it is not finished — the developer is still implementing the business requirements, so bugs on this are basically moot.

On last week’s project, the UI guy and I had to release something on the rarely-used STAGE server as a DEV server, because we are also getting the whole lifecycle environment up and going, and we had no proper development environment to deploy to and bang out the fleshing-out and development of a major rearchitecture we are working on.  Also, the data environment had not been refreshed in at least two years!  The UI guy started to ask the biz team to start banging on it, but I said WOAH WOAH WOAH, cowboy — hold on a minute.  We aren’t done yet!  Logging bugs on an unfinished piece is like telling me that flour doesn’t taste like baked bread.  The repercussions for logging bugs out of context are these:

  1. Time lost investigating bad-data problems that are just dead ends, like old/out-of-context/non-refreshed data.
  2. Statistics that will be mined (out of a bug tracker) by people who do not understand the context — this is tantamount to lying with stats.  Logging and “fixing” (basically finishing a story) bugs on unfinished software, and using this to show how good/bad people and processes are, is very simply a NON-OBJECTIVE and UNSCIENTIFIC use of crap data.
  3. Team dynamics can be difficult when this happens.  Mistrust.  People holding things close to their vest.
  4. If you are doing TDD or BDD, all your tests break up front.  Seriously — you are going to log these as defects and fix them and close them as PART of your development of a new story?
  5. If a developer has to worry about this QA process, then why should time even be wasted sitting with the stakeholder fleshing out, say, a screen, if they are just going to log the missing pieces or code plugs as defects?

See what I am getting at?  Here are a few more examples of how this kind of problem — when to log defects — seriously impacted work on my teams at two different Fortune 50 companies in the last 10 years:

  • At one place, an engineer and I were developing (with another remote team) the Hudson builds.  We were also trying to normalize the IDE builds (i.e., build-button actions), the in-IDE Ant builds, and the Hudson Maven/Ivy builds.  Developing them.  On a development server.  But upper management saw fit to start sending out nasty “don’t break the build” messages over and over to everyone, pointing at us core people.  It was nasty . . . we spent a ton of money setting up another system just to get out of this stream before we could deploy to the actual place we were supposed to develop this!!!
  • Over 10 years ago I had a manager, who had come from QA, who would literally start logging bugs on our DEV server, wait until we completed the feature, then close them.
  • At another place we’d do screens (like I mentioned above), and if we dared release a small piece to our server, or our iteration was observed, defects would get logged on partially complete work.

I have some other examples, but those are the major ones . . . and now this at my current gig.

Some of you may think this is a total outlier, but it’s not, not at all.  Some of you may say, “Well, why not just log everything?”  And my answer is: have you worked at many places?  Because if you have, you’d know the general attitude toward developers may be distrust — which is partly our own fault; you know, those cowboy coders who left everyone holding the bag.  And stats can be used to show ANYTHING.

Case in point: I worked at a place where, by the management team’s own measurement, we pushed out more features than any other team, but they ignored the very stats that showed it (the ones they told us to use) due to a built-in site bias (i.e., all of the management lived in another city).  hahaha, serious!!!

My advice is this: avoid situations where anything other than a rigorous process can be used.  It just helps everyone out — it creates the proper buckets and behaviors.  More feedback is better, of course, but disallow, in your JIRA or whatever, bug logging against DEV; or allow a metered bug log at worst.  But crank it up in true QA.

Most places have minimally three levels to their bug flow:

  • Pre-release defect tracking: the DEV-to-QA defect process — code is released to QA, defects are found, and code is fixed and re-released.
  • UAT defect tracking: QA to STAGE/User Acceptance.
  • Released-code defect tracking: code in the field.

Each has a different scope.  If we are true to the agile ideas, then all of us can take part in each phase — that is, all of us as a team: BAs, QAs, DEVs, and Owners can give meaningful feedback.  The power is in following a nice process and then re-evaluating that process.

Continuous Deployment Considerations

The other day I got to meet a UI developer who works in the sphere of HTML, JavaScript, and CSS.  I am working on a back end that I inherited from another consulting company; it’s a Java stack and requires a deployment for changes.  The UI guy looked at me, asked about the large process we use to do a release, and then asked me a compelling question: “And you are OK with that?”  He is used to deploying UI changes at will from a CMS.  That fits in the realm of “continuous deployment,” which is the automated version of deploying at will.

Continuous Deployment (CD) is deploying new software changes as soon as they are ready to be deployed instead of waiting for a release date.

It’s a good question; and it also shows why doing software isn’t necessarily common sense, because you have to dig to get the answer.  I did a little reading and dug into my experience to think about scenarios for why an organization could, or even should (or should not), do Continuous Deployment.  A lot of my development experience over the last 10 years is in health care, and this impacts my view on the subject quite a bit.  So here is my list of questions and whys:

Definitions:

Critical systems: those that involve people’s actual lives, like health-care notifications of conflicting pharmaceuticals, or power-grid monitoring, etc.

Non-critical systems: those where people are not as dependent and changes are not life-threatening, e.g. a retail system.

  1. Is the system a critical system? If it is, continuous deployment is probably not for you, because the chance of deploying an error and having to roll back (if you can even catch it, four deployments later) increases with the number of deployments.
  2. Do you have the time/resources to build and maintain the CD infrastructure? If not, doing CD will be very time-consuming.  You will want a repository and a CI build like Hudson up, minimally, to do this, AND there are other methods like pre-check-in code-holder scripts, etc.  This takes time and commitment, and possibly another person to maintain it.  You might even need a cluster of machine instances (virtual or otherwise) for the next consideration:
  3. Do you have a bank of automated QA tests? If not — if NOT — then you will probably generate and propagate a ton of bugs.  You need to test, test, test.  It is a requirement for CD, not an option.
  4. What technology or separation of concerns do you want to, or can you, continuously deploy? If it’s the base underlying framework, you might be in dangerous territory, because one small change will cascade up to remote areas of the application — for instance, another piece of the UI you did NOT work on could break.  Again, test.  Isolated UI pieces or isolated systems might be an easier place to do CD — and that might be all the business needs from CD.  Another consideration is whether you have a system with lots of dependents; i.e., if you are a REST services piece with 5 applications dependent on your service, all those apps have to coordinate with your changes, and that is absolutely not an easy thing to manage.
  5. Do you have a database/content-object versioning system? You need to be able to roll domain changes back quickly, synced up with your code versions.
  6. Will training be required for the new versions? If so, then CD would be a bad route to go.  Imagine having to learn a new piece of a system constantly as a nurse, or even as a bookkeeper at a retail store.  A system I recently worked on deployed to many clinics and was used in exam rooms — tons of doctors and nurses.  CD was far too risky for that situation.
  7. Do you have a good feature-review process? If you don’t, it’s possible that business “prototype” code will get deployed and damage the real mission of the application.  You still need human eyes on the requirements, and the ability to review the effects of a change before deployment.
  8. Are the deployment dependencies large? If you have slow-pinging servers, or WAR deployments (vs. PHP changes, which can be fast), or tons of data services, then your downtime might be too big to start, or you will have to drop in a load-balanced system to accommodate the time it takes to deploy, and automate that.
  9. Can you even automate the process? If you do not have even a good way to drop CI in, maybe it’s time to evaluate your system for new techniques anyway.
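As a sketch of what questions 1, 3, and 5 imply in practice, here is a minimal deploy-with-rollback script.  Everything in it (the artifact names, the deploy and health-check stubs) is hypothetical, not from any real project:

```shell
#!/bin/sh
# Minimal continuous-deployment step with automatic rollback (a sketch).

CURRENT=app-1.4.war     # last known-good artifact (hypothetical name)
CANDIDATE=app-1.5.war   # freshly built artifact (hypothetical name)

deploy_artifact() {
  # Stand-in for the real deploy step (copy to the server, restart the
  # container, run database migrations, etc.).
  echo "deploying $1"
}

health_check() {
  # Stand-in for the automated QA/smoke tests from question 3.  A real
  # check might hit a status endpoint and verify the response.
  true
}

deploy_artifact "$CANDIDATE"
if health_check; then
  echo "promoted $CANDIDATE"
else
  echo "rolling back to $CURRENT"
  deploy_artifact "$CURRENT"
fi
```

The point of the sketch is that without the automated health check, the rollback branch never fires, and CD just becomes continuous bug delivery.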

The idea of CD is compelling, and I hope these questions shed some light on the larger implications and considerations of trying to achieve this level of deployment.  But, especially due to the training and criticality issues, CD is not for everything.

Portable Files, Online

I’ve had this problem for quite a while.  I use Zim as a personal wiki to keep all my notes for work and my projects (like an Agile book I am working on), and I like to version it.  Also, I need to access the Zim notebooks from several places (like different computers) and on different platforms.  As you can imagine, this has created a problem.  In addition, I don’t want any of this information public — they’re my private notes.

For a few years I toted around a USB stick.  I tried keeping it in my server space, but it got tiresome copying and pasting files.  Usually what would happen is that a company I was consulting for would have some security arrangement where they wouldn’t allow FTP, or SSH, or any access, etc.

Finally, I settled on using TrueCrypt to store the directory of Zim files.  It works on Mac, Windows, and Unix/Linux.  TrueCrypt lets me encrypt the entire directory and mount it as a drive.  So, with my security issues solved, I tried different solutions.  One was storing the file in my BitBucket account, until that got blocked at my current place.  But . . . they allow Dropbox.

Dropbox is totally what I was looking for; some places don’t allow it, but my current place does.  It auto-syncs my TrueCrypt file.  I use Darcs for my versioning (super lightweight) and am good to go.  Dropbox is also free for the first 2 gigs.  And, since you get local copies, you don’t have to worry about internet lockout in case you are in Northern Ontario writing a manifesto.

Here then are the requirements:

  1. Works on all major OS’s (Mac/Windows/Unix-Linux).
  2. Versionable.
  3. Encrypted.
  4. Accessible from anywhere.
  5. Relative ease of use.
  6. Cheap (or free).

And here’s the portable encrypted desktop wiki solution:

  • Wiki: Zim.
  • Encryption: TrueCrypt.
  • Versioning: Darcs.
  • Access from anywhere: Dropbox.
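Day to day, the routine looks roughly like this.  The volume path, mount point, and notebook directory are hypothetical, and the TrueCrypt/Darcs command-line syntax should be verified against your installed versions:

```shell
# Mount the encrypted volume that Dropbox syncs
# (~/Dropbox/notes.tc and /media/notes are made-up paths):
truecrypt ~/Dropbox/notes.tc /media/notes

# Edit the Zim notebook inside the mounted drive, then version the changes:
cd /media/notes/zim-notes
darcs record -a -m "today's note edits"

# Dismount so Dropbox picks up the updated container file:
truecrypt -d ~/Dropbox/notes.tc
```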

I’ll keep doing this until the next place (or the current one) chooses to cut me off from Dropbox.

A Courteous Architect or Developer

I’ve been having discussions with developers at other job sites and listened as they tried to “make their mark” at the places they were working.  One of the developers had dropped a new templating engine into a significantly large site that already had a templating engine.  This person did it because “that other template engine is garbage.”  He had been on the job two days.

Something that really bothers me about this industry is the number of egotists we have to deal with.  This person’s actions — so quickly — show a massive ego and short-sightedness.  I put some questions to this developer:

  1. So, now the other developers have two template engines to learn and deal with? – Yes
  2. Can the other pages be easily converted to this new engine? – No, due to business logic
  3. Can both engines use the same style sheets?  – That will be the challenge

Now, a thousand developers and architects will tell you, “I like to design/architect, to pick out the components and wire them together, then move on.”  Having been in this business a while, and knowing plenty of developers such as Template Guy, I can say with honesty that a very good developer or architect has the insight to pick out technologies that easily and efficiently achieve the business’s goals, don’t hinder development and maintenance, and are cost-effective.

But to really do that, one has to put oneself in the others’ shoes:  and that means dropping the ego.

Sure, it’s definitely OK to be selfish and try out a new tech.  But after that, have the foresight to see what will happen with your decision before you move.

Cases in point: on several projects I’ve seen old, simple dependency-management systems (a directory in a code repository) get half-replaced with Maven or Ivy.  Now, instead of a single solution, there were two solutions.  Why?  Well, for one, the developers who did the upgrades didn’t have the impetus to finish the job they started, because it got difficult.  Also, they probably didn’t understand the full extent of the requirements in the application’s makeup and got stuck.  And in the end they left a large pile for everyone else to deal with and declared their actions a victory of technological savvy.

Our egos are good things because they drive us to be inventive, especially when tempered with the ability to meet the real goals of the project.  Usually these goals aren’t for us developers or architects to best each other, call things garbage, or show our talent in a new technology.   It’s to make something useful for others to enjoy and profit by.

Mappletosh locking out more Custom Customers?

I just got a new HP notebook at work (it’s running XP).  No kidding.  Anyway, during my time with the desktop-services guy doing the exchange, he glanced at my MacBook and told me something quite interesting:

Apparently, Apple is putting special screws into all the external cases, etc., for which only THEY have a screwdriver.  And if you bring in a legacy notebook, like my older, un-warrantied MacBook, they will take the liberty of putting in these screws for you.  Customers, then, very plainly, cannot work on or upgrade their machines.  They will have to take them into the Apple Temple at their local mall to get them worked on.

Wow.   Wow wow wow.  No home-memory upgrades.  No home hard drive swaps.  If your keyboard gets hosed up — no buying one off the internet and putting it in yourself.  Nope.

And it’s all part of the Apple Experience!

Well, the Apple experience is starting to suck.  I don’t understand why the Applites all defend a huge, bigger-than-Microsoft company so much when it does things like this.  Or things like not allowing Flash on iPhones.  All that.  Why the blind loyalty?

Being a developer, and a power user (I guess), here’s how my life works: I want to choose what applications I want to use.  I want to choose my own experience.  When I get a computer I am not buying a Speak & Spell, all glued together, that does one thing.  But that’s where Apple is going with it all.

People probably think, “Why so down on Macs?”  I’m not, but they keep doing stupid things.  Listen, Mac doesn’t make the best development IDE; someone else does.  So why lock it out?  The best desktop wiki?  A company that makes wikis, not Mac.  The best music production software?  The best browser?  The best multimedia center (even Windows Media Center is better than iTunes)?  Not Mac.  Not Windows.

Linux figured it out — just provide a solid hardware interface (called the OS) for the applications you want to run, for your OWN experience.  Why is it, then, that Mac can’t learn what Microsoft and Linux have learned?  I don’t know.

But the funny part is that you can get Parallels and VMware Fusion for cheap, and VirtualBox for nuthin’ — to run your Windows or Linux apps on Macs.  Pretty funny.

—-

This article was written in a SeaMonkey browser running inside Puppy Linux running on a MacBook running VMware Fusion 2.03.