EC Tech Meetup March 1, 2016: Java Concurrency Intro – Layman’s Delight

Obtained a room in downtown Eau Claire over lunch to practice giving presentations (since I don’t do that much anymore) with a layman’s Java concurrency intro.

At a very high level I presented threads, concurrency, locks, and synchronization, with real-world analogies for each mechanism.
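
For flavor, here is a minimal sketch of the classic example a talk like this leans on (not the actual slide code): two threads incrementing a shared counter, where synchronized plays the role of the single key everyone has to share.

public class CounterDemo {
    private int count = 0;

    // Without 'synchronized', the two threads below would lose updates.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo c = new CounterDemo();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 200000 with the locks in place
    }
}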

Java Constants – Not In The Interface

Business rules? Hopefully centralized to the One; or life will be short and brutal.

I was doing some refactoring on a massive codebase when I faced the question of where to put constants for the default values of business rules.  This has been a subject of design discussion in Java for as long as I can remember, and certainly for over a decade of my own career.  Writing object-oriented code you try your hardest to avoid big “global” files, but sometimes you need one central place for constants.  In my case, I was refactoring the same rule spread out over several classes, each containing:

public static final boolean DEFAULT_VALUE = false;

So you can see the problem here — if this definition sits in several classes there’s a chance any change will be missed and you’ll have breakage.  There should be one definition to rule them all.

Now, one place you *might* be tempted to drop your constants is an interface.  I’ve seen this before, but in my opinion it is decidedly a bad pattern: interfaces are meant to define relationships to other components, and parking constants there is about as appropriate as defining a complete method in an interface.  Unfortunately this had already been done in the code, and it’s usually good policy to match existing patterns — even bad ones — until you can get around to refactoring them out.  I decided to refactor.  The discussion of where to put Java application constants has been going on forever — see, for instance, this older article on the venerable C2 site: interfaces for defining constants – c2.com.

Some other solutions in the recent codebases I’ve worked on:

  1. property files
  2. enumerations
  3. abstract classes
  4. static classes
  5. class level, depending on the scope

And that is the key  — “what is the scope?”  Global/application/multi-context scope becomes more challenging in design.

In many of the web applications I’ve worked on, if we used constants on the backend we’d make enumerations or a static class to hold them.  Then if you need to expose them, say to tag libraries or the front end, you can import them or make a service that exposes them.  Enumerations lend themselves to this quite well, as in the sketch below.
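
As a sketch (the names here are made up, not from the codebase), an enum gives each default a single home, plus a little behavior to go with it:

public enum BusinessRuleDefault {
    DEFAULT_VISIBILITY(false),
    DEFAULT_AUTO_RENEW(true);

    private final boolean value;

    BusinessRuleDefault(boolean value) { this.value = value; }

    public boolean value() { return value; }
}

Callers read BusinessRuleDefault.DEFAULT_VISIBILITY.value(), and there is exactly one definition to change.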

The important thing is to centralize that default if it is a business rule.  Multiple definitions of a default value, or of any value, become problematic.  I’ve seen the same list of statuses kept in string constants, in enumerations, and in a database lookup table, all in the same application.

The company paid a lot of money to fix that.

EC Tech Meetup February 10, 2016: Java Lambda Workshop

Hands on Java Lambdas with some online tutorials.   Breaking in the new Mac.

A general Java code learning session at my work site with a few interested parties who wanted to see some Groovy too and share some Scala/Clojure.
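
For flavor, the kind of snippet we worked through (a sketch from memory, not the tutorial’s exact code):

import java.util.Arrays;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> langs = Arrays.asList("Java", "Groovy", "Scala", "Clojure");
        // A lambda for the filter, method references for the map and the print.
        langs.stream()
             .filter(s -> s.length() > 4)
             .map(String::toUpperCase)
             .forEach(System.out::println); // GROOVY, SCALA, CLOJURE
    }
}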

Thanks to the Milwaukee Colectivo Coffee suppliers and my workplace!

So . . . Your Git Repo Moved

Sometimes “they” move the location of your git repository.  Seems to be happening a lot in my last few years of coding.

There are a few ways to deal with this move, but it is important to remember that git is very, very good at the move situation: every commit is unique, and its distributed nature lends itself to using different locations.

The DevOps team sent out instructions for dealing with the change, including “check everything in, make sure you are up to date,” etc.  Prudent, but not strictly necessary.  They sent out a command-line method for the change and even renamed the repository for some reason.

(That rename did not need to be done for our use case, but certainly you could.  Although, since they did change our repo name *and* we are all still located in our original local directory “oldreponame”, I am not sure what will eventually play out.  Niiiice.)

Well here were their marching orders:

git remote set-url origin https://<username>@bitbucket.org/newrepolocation/newreponame.git
git remote set-url --push origin https://<username>@bitbucket.org/newrepolocation/newreponame.git

Always good to know git command line stuff, no doubt.

Myself? Well I’m a ragtime guy. I just opened up /.git/config and changed this entry:

[remote "origin"]
url = https://<username>@bitbucket.org/oldrepolocation/oldreponame.git
fetch = +refs/heads/*:refs/remotes/origin/*

to this:

[remote "origin"]
url = https://<username>@bitbucket.org/newrepolocation/newreponame.git
fetch = +refs/heads/*:refs/remotes/origin/*

Badabingo.
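
Either way, git remote -v makes a quick sanity check that both the fetch and push URLs took:

git remote -v
origin  https://<username>@bitbucket.org/newrepolocation/newreponame.git (fetch)
origin  https://<username>@bitbucket.org/newrepolocation/newreponame.git (push)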

I didn’t tell DevOps though.

EC Tech Meetup January 27, 2016: Techniques for Finding Remote Tech Work

Tonight the meeting was held in the Madison/Waterloo WI area.  We shared notes, contacts and techniques for finding remote work.

General sentiments:

  1. Finding remote work is getting easier.
  2. You have to invest in some good gear for making communication easy.
  3. The work place in your house/coffee shop:  experienced developers thought life/work separation was more important (i.e. separate spaces); less experienced tended to mix them more.
  4. Finding one good recruiter you could always rely on was near impossible, so spread your search around.
  5. Don’t be afraid.

Quite curiously, we found more of a remote culture in urban areas (Madison, St. Paul/Minneapolis and Chicago, per our representatives at this meeting) than in rural ones such as Eau Claire or Watertown; the smaller, more rural sentiment is to accept long commutes or stick to local jobs.  None of us foresaw this, and it was puzzling; perhaps it is culture driven.

  • Did living in a rural area with less opportunity mean developers tried less to get remote work?
  • And were people in metro areas, where there is more local opportunity, somehow more likely to find remote work?

A good session.  Thank you to The Crystal in Madison for drinks later on.

Supportability

A Phonic Soundboard is not a Harley; maybe like a Yamaha.

If you ride motorcycles, inevitably you’ve had to find parts.  One place to buy parts by catalog is Dennis Kirk.  If you get the paper catalog you have two choices:

  1. A very, very thick catalog for Harleys.  In this catalog you can find almost any repair part for Harleys dating back at least 80 years.
  2. Then there’s another catalog just about the same size “for the rest” – Yamaha, Honda, BMW, Suzuki, Ducati, etc.

Argue all you want about which motorbike runs and which doesn’t.  Fine.  But the cold hard fact is that in the long-standing supportability of motorcycles, no one beats Harley-Davidson.  You can find parts, even hard-to-find parts, and keep your bike running.

By the way, I am not a Harley advocate, and some models, like Suzuki’s KLR and the BMW GS series, have great parts support.  But nowhere near, not even close to, Harley-Davidson.

This metaphor extends to many objects.  I own the Phonic Helix mixer board pictured above.  It’s an excellent board and the sound is great.  It’s analog — mics and instruments in (FireWire too, but I don’t use that anymore).  Mine is about eight years old, less than ten anyway.  But the knobs have a coating that has gotten sticky, and the plastic has turned brittle.  When I contacted Phonic they told me I was SOL: they don’t make the board, they don’t make the knobs.  I compared newer-model knobs to what I have and they could work; they look exactly like the same knobs.  But I am guessing Phonic would rather I throw the board away and buy a new one than make money off parts.  Other boards in that range, or of course more expensive ones — Mackie, Allen & Heath, Soundcraft, even the lowly affordable Behringer — all keep spare parts available for many years.  There is nothing this board lacks that I need, so why buy a new one?

And phones.  Recently I was reading a Washington Post article about a new luddite class that refuses to conform to planned obsolescence in phones — the constant network upgrading, for instance.  The article goes deeper, touching on neo-luddism, the kinds of unwanted changes we already think about, and the behaviors we already use to choose new tech anyway (like not phone-surfing sites whose popups and slow analytics cause long page loads).  But certainly planned obsolescence is a consideration when buying a more powerful device.

Things can still work.  I repurposed old wifi routers into gateways for a few small businesses some years ago; I still have a first-gen Roku, still working, and receive “threats” from Roku with coupons to get a new unit with more capability.  When I upgraded my Android phone it was because I had kept my old phone so long it wouldn’t run any new apps due to hardware and OS restrictions — but it still called and texted.  I bought one with plenty of power that would survive a few generations of Android upgrades.  My personal laptop I purchased ahead of its class, an i7 with a lot of RAM, 4-5 years ago, and it has made it to this day.  (It is still ahead of the specs of an average computer, but Microsoft won’t support a Windows 8 or 10 upgrade on it, and I can’t divine the reason from their install failure logs.)  Windows 7 is good until 2020 anyway.

For me, the threshold for support depends on the object.  Musical equipment — guitars and amps — lasts forever and holds its value, and you should be able to buy parts for 50 years minimum.  Electronics?  Well, analog equipment is always in style and uses universal parts, so why not at least 20 years?  Five years without spare parts is ridiculous; ten years, too.  Especially for a company’s flagship model (as this Helix was).  Computers?  Many people I know are running cheap Lenovos and Toshibas that are years old — Windows 7 or Vista — and they do what their owners want.  A 7-10 year life span isn’t unreasonable these days; there are Linux distros made just to refresh old computers.  What about Macs?  The last one I had died over and over, so it wasn’t worth it, but I can still get parts, albeit at a high cost.  Probably not for an old Apple IIe, but an Apple IIe has no use I can think of beyond the novelty.

There is also the usability need.  A mixing board gets taken into environments that create the need for spare parts: shows, being hauled around.  iPhones need new screens a lot, as they tend to crack, a lot.  My sister complained her Lenovo broke — power cord, keyboard — but I noted she had carried it everywhere for three years, and a non-business-class computer doesn’t come with that durability built in.

What about software languages?

I often wonder who was in the room when someone picked Go as the main language for a company.  Where does this come from?  Or Clojure.  Or, even these days, Ruby with Rails or Groovy with Grails.  The question, from a developer who has worked mostly on other people’s projects, is whether support — the forethought of finding developers to work on the project — was considered beforehand or simply accepted as a cost of business.

I agree that diversity pushes the industry, and certainly choosing Java/Oracle is not the path for all software.  Also, unforeseen requirements (scalability with PHP, or the safety of compile-time checks versus using all JavaScript) can come into play down the road.  And how often do (shudder) proof-of-concept projects become *actual* production code?

I would argue that most of the time your software is only as good as its support.

I just throw that out there hoping you never have to go looking for knobs for your current application.  It can be costly and frustrating.


Tony Caponi

About 10 years ago I made a mistake I have regretted to this day. I turned down a minimum wage paying job working for Tony Caponi.

Anthony Caponi was an artist.  He passed away this last fall, 2015.  He was in his 90s, humble, and accomplished: an Italian immigrant who was a boy under Mussolini, a veteran of the United States military, an art chair at Macalester College, and a sculptor of real stature.  His underlying theme was nature, but nature beyond the material, and if you go to the Caponi Art Park you will experience much of his vision and spirituality.

Around 2005 my sister was working in a promoter capacity at the Caponi Art Park.  She is in the art business and works mostly for non-profits and independent artists.  So, with that, I became one of her main go-to volunteers for manual labor at events in the park: moving chairs and tables, setting up the sound for plays, the electric, drinks, parking.  All that.  During this time I got to meet Tony Caponi and make his sculpture part of my time in his park.  His works are built into the landscape.

I really liked volunteering there.  I got to see a lot of wonderful shows — Shakespeare in the Park performances, avant-garde music, children’s activities, and Elizabethan Festivals.  I learned that you do NOT call Elizabethans “Renaissance” people or they will get very angry at you.  I learned how to operate golf carts in strange landscapes.  And I continue to go to some of their events in Eagan, Minnesota, making the 2-hour drive from Eau Claire these days, since I moved from St. Paul.

During this time I was transforming my business when my sister said Tony needed help.  It would have been working closely under his direction: driving a Bobcat, using my muscles, and helping him build his craft.  The decision weighed heavily on me because I needed cash flow, yet the art helper job paid very little, and I also needed time for my business and software.  But I knew I was passing up a good chance.  In the end I said no, with many, many regrets.

Imagine from a design aspect what I could have learned from a famous, established artist.  And from working with such a great man with great vision, in an organization with great people like his wife and my sister.  In this video you can see a bit of what he accomplished.

Design.  Tony was hands-on.  Not some person staring into the sky, but hands-on.  There is so much to learn from doing and he helped me understand that.

In college I got to work around a very famous physicist, James Cronin, as his class lab preparation technician.  He won a Nobel.  I think about those few quarters alone with him and our small discussions, and how much I learned from his attitude, humility, and vision.  Being around Tony was like that: you weren’t awed, you weren’t belittled; their existence wasn’t predicated on any sort of ego.  They were forces, and you learned because they did, and you did.

There’s not really a lesson here, just some stories.  From my perspective there are some chances you have to take: jobs out of reach that are worth interviewing for even if you do not get them; not being afraid.  There’s always more than one right way to get things done, and plenty of room in this world for everybody.  Doing a good job, not a half-assed one, when possible, and being responsible.  Getting things done well enough.  And I tried to be smart, to learn from people, and to listen.

Do you like design?  Then go do some.

Do you like to code?  Go code.

It is too bad our industry does not appreciate accomplished, seasoned, hands-on people.  I guess it doesn’t have the metrics to show this matters, but onsite we know it by the fewer mistakes made and the greater successes.  It also makes me cringe to see up-and-coming developers thrown into hard situations, like quarterbacks starting in the NFL too young.  We have a great industry and should enjoy it more.  I certainly do, by choice.

“. . . a child learns more from what he does than what he hears, more from demonstrated behavior than what he is told.” – Anthony Caponi, Meaning Beyond Reason

JSTL time travel resolution

Your variable may be assigned anything at any time if you know the secret to time travel.

Working on a ticket to fix a wrong form action URL being used for a particular page condition, I ran across a curiosity in a JSTL tag file.  The value for <c:url> was not resolving in the cascading manner the original author had intended.

The bit of code I was working on handled a mapping search form that used one of two user-selectable conditions:

  1. Address/zip entered by the user, which would call endpoint /mapping (and the separate Java controller method getMapping).
  2. Or browser location (latitude/longitude), should you as a user allow it.  This would call endpoint /mapping/location (and Java controller method getLocation).

The defect: only /mapping was being used in both cases, so if a lat/long was passed getMapping would break.

The initial tag code looked like this:

<c:url value="/mapping" var="search" />
<form:form action="${search}" method="get" id="results" commandName="MapForm">
  <c:if test="${URL eq 'Mapping' }">
    <input type="hidden" value="${q}" name="q" />
    <c:url value="/mapping" var="search" />
  </c:if>
  <c:if test="${URL eq 'Location' }">
    <input type="hidden" value="${latitude}" name="latitude" />
    <input type="hidden" value="${longitude}" name="longitude" />
    <c:url value="/mapping/location" var="search" />
  </c:if>
  <button/>
</form:form>

In the above code the original author intended the variable search to be reassigned LATER based on the user’s choice.  But the form’s action attribute is evaluated when the <form:form> tag starts, before either inner <c:url> runs, so the action is always “/mapping”.  I looked around (rather quickly) for documentation on this and found little beyond a few allusions to tags executing in the order they are written.  That makes sense: even in a JSP compiled to a servlet, once a variable’s value has been read, reassigning it later doesn’t retroactively change that earlier use, not without time travel.  Now that I think of it, this is pretty obvious.

My solution was to change the code to this:

<c:url value="/mapping" var="search" />
<c:if test="${URL eq 'Location' }">
    <c:url value="/mapping/location" var="search" />
</c:if>
<form:form action="${search}" method="get" id="results" commandName="MapForm">
  <c:if test="${URL eq 'Mapping' }">
    <input type="hidden" value="${q}" name="q" />
  </c:if>
  <c:if test="${URL eq 'Location' }">
    <input type="hidden" value="${latitude}" name="latitude" />
    <input type="hidden" value="${longitude}" name="longitude" />
  </c:if>
  <button/>
</form:form>

You know something?  Now that I look at this, I have a feeling this thing was never even tested!  Seriously, who would write:

int i = 4;
int j = i + 2;           // j is 6, computed right here
i = 17;                  // reassigning i later does not touch j
System.out.println(j);   // prints 6

And expect the answer to be 19?

It’s in npm somewhere . . . it must be.

“Package versions? As many as there are stars in the sky.” – Dances with Dependencies

I’m sure many developers are finding nodejs buried in pretty much everything these days, and that can mean dependency management with npm and package.json.  More often than not I am finding version collisions and unmet dependencies in non-trivial, time-draining situations.

Working on a large commerce engine, it fell to my shoulders to do a minor version upgrade.

Versioning on some pieces has been very discrete: JDK 7 (for now), Ruby 2.2.2, JSTL 1.2 libraries, and specific JDBC drivers.  But the vendor ignored specifying versions for the Ruby gems and nodejs packages, so “get the latest greatest” rules the roost.  So far the who-cares-what-version gems have not been a problem (but they *will* be, I can almost guarantee it).  The packages node needs aren’t pinned in package.json either, allowing the pull of the latest-greatest as well.

I have a few setup tasks, then to run  npm install  in the project to pull the node modules, then do gradle build.  We can run it from there.

First the vanilla try: I run npm install, then gradle build — breakage on the build:

Running “sass:dev” (sass) task >> (base: #CE1A2B, light: #e63546, dark: #a11422) isn’t a valid CSS value.

Working with two other team members, I got a hint: we needed to specify a version, arrived at by magic (trial and error), as a dependency entry in package.json:

"node-sass":"3.2.0"

That fixed some problems with our EXISTING builds; the breakage had been intermittent, and in hindsight I am guessing it was a race condition among node-sass dependency pulls: get lucky and the right version wins.  We propagated the fix to the repository.

But for the minor version upgrade I was working on?  Didn’t work.  Running npm install would generate an error:

Cannot download “https://github.com/sass/node-sass/releases/download/v3.2.0/win32-x64-46_binding.node”

Digging through GitHub to see their releases: there is no -46 binding for node-sass 3.2.0.  The newest version right now is 3.4.1, and that version (pulled by the latest-greatest method, our original methodology problem) does have a -46 binding but breaks the build with the CSS error mentioned above.  Another very strange thing is that no matter what version I put in, it asks for the -46 rendition, as if when I specify a version in package.json only the x.x.x part is parsed, with no way to specify the -xx part.

Running npm install --dev was a huge mistake: it pulled the universe down and still broke.  When I went to delete the node_modules directory there were really long file paths that Windows hates to delete; I had to rename tons of recursive folders to “a”.  It cost me at least an hour to bail out of that route.

So I started to climb through the node-sass releases upward from 3.2.0, and found a -46 binding for version 3.3.3.  I tried that, and it worked and built.  I don’t know exactly why; I can’t generate a legible dependency tree or do a dry run with npm install.

It’s disconcerting, though: why 3.3.3, other than that it has a -46 rendition?  There is no vendor explanation, and only a long regression test will say if it really works.  (In hindsight, the -46 suffix looks like the Node ABI version, which is determined by the installed Node runtime rather than by anything in package.json; that would explain why no syntax lets me pin it, and why older node-sass releases simply never published a binding for it.)  I talked with the local nodejs expert about the kind of dry run I’d like to do with npm, how to get a dependency graph, and how to figure out which version I need; this is the next step, to line up with the regression tests.

Specifying package versions seems to have a shortcoming as well, in my opinion.  This article from nodejitsu does a great job of discussing semantic versioning, but there doesn’t seem to be a way to specify my -46 release, and the article kind of confirms that (I tried every syntax under the sun).  The package.json syntax for dependencies covers my situation, and I tried different things without success.  For instance, specifying “~3.2.1”, an approximately-equivalent range, produced a successful npm install but not a successful gradle build.
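
For reference, the range syntaxes npm does understand in a dependencies entry (the versions here are illustrative) all operate on the x.y.z part only, which is exactly why the -46 binding suffix can’t be expressed:

"node-sass": "3.3.3"    (exact version, the safest pin)
"node-sass": "~3.3.0"   (approximately equivalent: >=3.3.0 <3.4.0)
"node-sass": "^3.3.0"   (compatible with: >=3.3.0 <4.0.0)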

The npm system isn’t quite as mature as say maven but it is making leaps because people like you and I are coming in with experience on different systems and use cases to contribute.  For now though, I have to dig down into the dependency graphs even further, and better yet, ask the vendor to list versions of packages it needs for its software instead of assuming the latest-greatest will always work.

Large Grails Log File

Where are you tiny tiny huge file?

I was reorganizing my directories for several Grails and Java projects, and I had local branches from our git server I didn’t want to lose, so re-cloning wasn’t an option.  Much to my surprise, when I got info on the entire repo directory it was almost 28 gigabytes.

My first check was to see if the .git directories, any project directories (for IntelliJ and Eclipse) and any roaming maven directories mightin’ be largin’ up my projects. No, those were not the cause for concern.

I finally ran a search (find ~ -size +10G) for some huge files and found what I was looking for in the grails /target directory of one of the projects.

Searching through Config.groovy, I found the log4j config entry didn’t have a file size limit set.  Maybe for a development environment this is OK (um . . . 27 gigs?), but I like to repeat my settings, assigned or not, throughout all the environments to provide visual cues for the next person coming around to support the software.

An answer for managing log4j file size is here in stackoverflow:

You can control this in the log4j DSL in Config.groovy, under the appenders block. The default behaviour is equivalent to an appender definition of

file name: 'stacktrace', file: 'stacktrace.log'

in prod mode and file:'target/stacktrace.log' in dev mode, you could replace it with e.g.

rollingFile name:'stacktrace', file:'stacktrace.log',
    maxFileSize:'5MB', maxBackupIndex:2

to limit it to 15MB (the active file plus up to two rolled-over backups).

I didn’t see an entry like that in our config.  Granted, our servers may monitor and parse these files, and log files off developers’ machines usually aren’t considered “ours.”  Still, this is falling through the cracks and has to be fixed, imho, at least for the local dev settings.
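
Untested against our app, but following that answer the dev-mode fix in Config.groovy would look something like this (the sizes are my guess for a developer machine):

log4j = {
    appenders {
        // Replace the default stacktrace file appender with a rolling one:
        // at most ~15MB on disk (the active file plus two rolled-over backups).
        rollingFile name: 'stacktrace',
                    file: 'target/stacktrace.log',
                    maxFileSize: '5MB',
                    maxBackupIndex: 2
    }
}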


We are running log4j in our applications, but note that this is considered legacy now and is entered on our tech backlog to fix.  Logback, not log4j, is the mechanism moving forward.  The Grails manual says:

By default logging in Grails 3.0 is handled by the Logback logging framework and can be configured with the grails-app/conf/logback.groovy file.