Context is the new Black

We’ll just ignore the fact that I haven’t posted in ages. Nothing to see here.
So… wow! A.I. is now everywhere you look, getting jammed down our throats at every opportunity. Don’t get me wrong: I’m a big user and have been since it became self-aware useful, so probably starting around early 2023. It’s only been three years and the improvements are staggering. I’ve programmed A.I. since 2008, so I think I’m in a fairly good position to judge the state of A.I. and its usefulness.

There’s a work transformation happening right now: old methods are rapidly being replaced, and new rules are being written, then re-written just weeks later. Everything is moving so fast it’s worse than the jokes about JavaScript framework release schedules.

What’s current this month probably won’t be current next month.

But the most noticeable change is the roles people will play in this brave new world. Anyone who sits at a keyboard will see their role change dramatically, mostly for the better, with some caveats. Either you’re on board, riding the wave, or you’ll be rolled and left drowning in a tsunami of new technologies.

Nowhere is this more rapid than in the role of the developer. A.I. has brought about a massive change to this role in a matter of months, raising a lot of still-unanswered questions, which I’ll get to in a moment. We’re heading into truly uncharted territory.

Questions, questions, questions

The first hurdle that, as a developer, I think you have to climb is recognizing that your workflow is fundamentally changing. The developers that are thriving are the ones that have embraced this new world order. They’re not necessarily the ‘best’ developers anymore, but the best at realising that the role now revolves around organising context, prompt engineering and reviewing what the AI has generated. For now.

And here’s the first of the questions that are arising. When the AI is better at writing code than you are, is it necessary to review the code with a fine-toothed comb? Currently, yes, but it won’t be long before that’s an unnecessary time suck. Literally, give it a month. Personally, I won’t miss it, but then I’m a lazy developer.

This brings a second question into focus. Since the model knows all the languages better than you, you can choose to build the application in the language best suited to it, rather than the language you know best. Want to write a mobile app? Ignore all the cross-platform fluff & go native: choose Swift for Apple and Java for Android without knowing either. Yes, we’ve tried that too.

So, third question: if you choose a language that you don’t know, how will you review the code effectively? And what will it mean to enforce coding standards if no one is going to read the code?

And if you choose a language that you do know, have you chosen a sub-optimal solution (according to the model), effectively nobbling the application just so you can review the code?

And a fourth, far-reaching related question: what does it mean now to learn a new development skill?

I used to want to be awesome at C# & .NET coding, staying up to date with the latest language features, frameworks & tools. I thought that would bring me higher salaries and make me more ‘desirable’ on the employment market. But is this really necessary any more, beyond what I already know? Is there still any benefit in knowing the difference between covariance and contravariance, in being able to apply S.O.L.I.D. principles, or even in demonstrating this level of knowledge?

It used to be that technical expertise played a large part in securing a well-paid role. What if that doesn’t matter as much? Will it now be measured on your English skills, how well you write a PRD, or how well you can prompt an AI compared to the other job candidates?

Although 47% of developers use AI daily according to the Stack Overflow 2025 survey, over half use it weekly or less, and 16% don’t plan to use it at all. Is this a mistake? Some employers have mandated A.I. usage, so where will that leave the developers that refuse to use it?

A story that leads to an application & a serious test.

Back in 2013 I was diagnosed with relapsing-remitting multiple sclerosis. It started with numb legs; I struggled to walk, had terrible balance and lost my eyesight to the point where I was effectively blind.

On visiting the hospital, I was seen by Professor Basil Sharrack, and I volunteered for a 10-year experimental drug trial. This was deemed to have failed in 2016 as I was still having relapses, so I was offered a pioneering stem cell transplant as treatment. I was patient #16 (in the world!) to have this treatment for MS, at the Hallamshire Hospital, and was told that at the rate of my disease progression I would be in a wheelchair within 5 years if I didn’t do something.

Of course I said yes. Three months in hospital, intense chemotherapy and one pluripotent stem cell transplant later, and I’ve had no relapses since. It’s been effectively a cure for a currently incurable disease. Go Science! Although: Chemo – 1 Star – Would NOT buy again.

However, my eyesight still had minor issues. The optic neuritis I suffered in 2013 had damaged my retina, leaving a remaining patch of missing vision. It’s a thumbnail-sized patch at arm’s length in my peripheral vision, so not a great loss, and it causes no problems day to day.

When I was recovering the first time, my sight was like looking through a leopard-print pattern. Not that you notice, because your brain fills in the missing pieces of vision, so you don’t realise you’re missing anything. This is known as “perceptual completion”, and it’s the reason you don’t have a “hole” at the point where the optic nerve exits your retina – the “optic disc”. Everyone has this blind spot. I thought: I wonder if I could map my missing vision, and track the changes as it heals?

Cue the Meyesight app.

When I want to learn something new, I find it’s best to have a ‘meaty’ application to play with. The typical to-do list, twee apps or single-page blog-post explainers are a waste of time.

So, being a developer at heart, of course I wrote an app to ‘map’ my missing vision. Enter Meyesight. This has been my go-to app to rewrite when I want to learn anything new. New framework? Rewrite Meyesight. New interesting language? Rewrite Meyesight.

The app itself consisted of the typical full stack – front end, back end & database – so I could use the language & associated framework as a learning platform.

It consists of a vision-mapping screen with a cross placed in the centre of the screen as a focal point. While you concentrate on the cross with one eye, a dot scans the screen, left to right, top to bottom. When the dot disappears you press the space bar to record the missing-vision location, releasing the space bar when the dot re-appears. Do this for both eyes and you effectively have a ‘map’ of your visual field.
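For the curious, the scan-to-map logic is simple enough to sketch in a few lines of Python. To be clear, this is a hypothetical reconstruction, not Meyesight’s actual code – the grid size, dwell time and function names are all illustrative:

```python
# Hypothetical sketch of the scan logic: a dot visits grid cells left to
# right, top to bottom, spending dwell_ms in each. While the space bar is
# held, the cells visited during that interval are marked as blind spots.

def build_vision_map(rows, cols, pressed_intervals, dwell_ms=100):
    """Convert held-key intervals (ms since scan start) into a boolean
    blind-spot map. Cell n is visited during [n*dwell_ms, (n+1)*dwell_ms)."""
    vision_map = [[False] * cols for _ in range(rows)]
    for start, end in pressed_intervals:
        first = int(start // dwell_ms)
        last = int((end - 1) // dwell_ms)
        for cell in range(first, min(last + 1, rows * cols)):
            vision_map[cell // cols][cell % cols] = True
    return vision_map

# Space bar held from 250ms to 450ms marks cells 2-4 of a 3x4 grid.
m = build_vision_map(3, 4, [(250, 450)])
```

Run it for each eye and the resulting boolean grids are exactly the ‘map’ described above.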

Meyesight Dashboard:

Introducing A.I. to the app.

Already having a deep love of A.I. & data science, I was looking for some way of adding an A.I. feature to a) make the app more useful and b) make it more complex to implement, since re-implementations were really just language-mapping exercises, and while loops look the same in any language.

After searching around I came across this paper on using neural networks, specifically a Gradient-weighted Class Activation Mapping (Grad-CAM) network, for medical imaging.

Thinking this might have some utility for mapping the changes in my missing regions of sight over time, in the same way the paper’s authors used it to map changes in tumour growth, I added a Grad-CAM neural network to help map any disease progression. I reasoned that the visual field patches changed in similar ways to the tumours in their sample medical images, so it might be useful. Even if it proved not to be useful medically, it was definitely useful as a learning exercise, which was the main point after all!
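For a flavour of what Grad-CAM actually computes, here’s a minimal NumPy sketch of the core idea from the Grad-CAM paper (Selvaraju et al.): weight each convolutional feature map by the spatially-averaged gradient of the class score, sum them, then apply a ReLU. In real use the activations and gradients come out of a trained network; here they’re just arrays of the right shape, and the function name is mine:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Grad-CAM heatmap from (K, H, W) conv activations and the
    gradients d(class score)/d(activation) of the same shape."""
    weights = grads.mean(axis=(1, 2))                  # alpha_k: average-pool the gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum of the K feature maps
    cam = np.maximum(cam, 0)                           # ReLU: keep only positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to a [0, 1] heatmap
    return cam
```

The (H, W) result is the heatmap that gets upscaled and overlaid on the input image – the same kind of overlay shown in the example heatmap below.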

Originally written from scratch in Python, it took me nearly 3 months just for the initial implementation. So, in total, the first incarnation of the Meyesight app to include A.I. took about 5 months. This will be relevant later.

Example heatmap:

In total, the app’s been re-written 4 times since 2014 – WPF, C# + ASP.NET MVC, Elixir + Phoenix, and finally React. The addition of the neural network was around 2018. Each re-write before this had taken at least 3 months. Remember, speed wasn’t the point here; it’s a learning exercise.

Agentic coding has joined the chat.

Now, let’s be perfectly clear: I don’t want to give up coding. I’ve been coding since 1981, starting with writing commercial games in 6502 assembly. I have history. A LOT of history. So I fully understand where the 16% of Stack Overflow responses come from. I have forty years of hard-won knowledge that I’d like to preserve, thank you very much. However, I also realise times change. A co-worker put it best in a recent meeting: when told we were to use A.I. for development, he asked, “Why? Am I not good enough?”. I think that sentiment rings loud. He’s probably the best developer I know. He’s already 10x, even without A.I.

Warning: skill deprecation approaching.

First, I’ll use the example of writing a game’s 3D rendering using 80×86 CPUs to do polygon fills, way back in the days when we wrote games mostly in x86 assembly. GPUs on video cards weren’t around, and the Voodoo card wasn’t even a twinkle in 3dfx’s eye until 1996. We used to optimise the polygon-fill code using tools like Intel’s VTune to wring out every cycle the CPU could muster. Remember, CPU speeds were measured in MHz – the 16-bit 8086 (1978) was just 5 MHz, and even the 32-bit 80386 was clocked at a measly 12 MHz. Getting speed out of these for polygon filling was a hard-won skill and gave any brave developer PTSD for years. Performance Tuning Stress Disorder is a real thing.

Then along came the C compilers. Suddenly, optimisers in compilers like Watcom C were doing the CPU pipeline optimisations for us. Boom, that skill was now redundant.

We used to write our own memory management systems in assembly. Then came C & C++ with their fancy malloc and new. Boom, that skill was redundant. We used to be responsible for carefully managing the allocations & de-allocations in C & C++ constructors & destructors; then came managed code like C#. Boom, that skill was now redundant.

There’s a pattern emerging here.

Developer knowledge gets deprecated when new technologies appear. No big deal: there’s always the new technology to become competent with, and the loss of obsolete skills is inevitable. But the underlying knowledge of how memory allocation works is still useful, because even though C# manages the memory, you’ll still need to understand what’s going on inside if you ever need to ‘tune’ what’s happening or fix leaks. Yes, it still leaks.

It wasn’t just computers; the same happened in other areas. Take rally cars, for example. I had a Subaru WRX STi Type RA and later a Mitsubishi EVO FQ330, around the time Richard Burns and Petter Solberg were WRC drivers for Subaru. In the mid-’90s the Impreza WRX STI’s Driver Controlled Centre Differential (DCCD) system was upgraded with a yaw-rate sensor to automatically adjust torque distribution, letting the STI “push harder” through turns without demanding as much expertise from the driver. The Lancer EVO IV introduced Mitsubishi’s Active Yaw Control (AYC) system, developed from rally experience to actively manage the car’s rotation in corners. Mitsubishi said AYC balances the lateral forces between front and rear to “maximize cornering performance,” enhancing or correcting understeer and oversteer as needed.

Boom, an average driver like me could suddenly drive a little more like Petter Solberg and get way better track times. They’d effectively put the rally driver’s knowledge into the car’s driver aids.

OK, AYC didn’t quite save me from aquaplaning on all four wheels. Generally, however, driving aids make cars safer, but they also lower the skill level required to achieve better results on track days.

These are all examples of the democratisation of a complex skill: lowering the barrier to entry, and so effectively lowering the bar.

But this time, A.I. & LLMs are different.

Let’s go back for another driving example. Imagine if Mercedes took Lewis Hamilton to one side and said, “Hey Lewis, we’ve put an AI into your Mercedes F1 car. It’s now autonomous and will drive round any track faster than you. Thing is, we haven’t given it vision yet, so you need to sit in the driver’s seat and describe the track to the car.”

I’m guessing Lewis lives for driving. You don’t get to that level by eating donuts and watching re-runs of Love Island. But you’ve removed the fundamental reason Lewis loves driving. He’s spent many years honing that skillset to an extreme level. Hello role insecurity, loss of identity, anxiety and reduced self-esteem. “Lewis, you’re currently the #1 track describer in Formula 1!” kind of loses the prestige a bit. But it would mean I’m in with a chance of competing against Lewis, as would anyone else who can describe the Formula 1 tracks in detail. See what’s happening here?

A.I. isn’t just deprecating the previous technology like before, replacing the old tech with something new yet still technical that requires skill to learn & understand. It’s effectively removing the need for that specialisation, that level of understanding. Mercedes wouldn’t need Lewis when any of the pit hands could describe the track in equal detail.

I think this will also happen to developers, or any tech person for that matter, especially those who have dedicated most of their working lives to their craft. Like me, and many others from my era. Homogenized skill sets are on the horizon.

Just as Lewis probably lives for driving, a developer’s identity is tied to problem-solving. If AI shifts them into passive roles, that can erode purpose and confidence. I’m not sure everyone will enjoy the shift from creator & problem solver to overseer. That’s a truly profound mental shift.

The Mercedes example is a direct analogy for what companies are expecting of developers right now. Well, almost. As professionals, we’re still expected to ‘review’ the code A.I. produces, in the same way Mercedes would want Lewis to describe the track. Maybe have a second person do a track-description PR and check his description’s good enough to win. But think back six months: AI could barely produce code that compiled. Give it another six months and we won’t need to review what it produces either.

Personally, I think we’re probably already there. I’ll show you why shortly.

Voices in the wild.

It’s not just me thinking about the erosion of skills and the lower perceived competency required for complex cerebral work.

In an article from FastCompany, analysts have observed that AI “empowers us to do things that once required years of training by democratising skills across the workforce.”

In another article, Maggie Smith of the North Carolina Department of Commerce notes that generative AI can boost productivity and the “democratization of skills,” allowing workers to accomplish complex tasks without deep specialized expertise. Figure 1 of her article, below, shows the white-collar jobs exposed to A.I.:

Notice the top half? This is validated by Geoffrey Hinton, the Godfather of A.I., in a recent interview on Steven Bartlett’s podcast Diary of a CEO, when asked:

SB: “What would you say to people about their career prospects in a world of super intelligence?”

GH: “Learn to be a plumber.”

Geoffrey is talking about a move from the top half to the bottom half, and this is borne out by the figure from Maggie Smith’s article above. Here’s another paper, from Microsoft Research, Measuring the Occupational Implications of Generative AI, where we can see a similar story emerge.

Physical jobs are safest, for now. But then China has just hosted the Humanoid Games, where over 500 robots from 16 countries competed against each other for speed, agility & endurance. One highlight: the Unitree H1 won the 1500m, setting a new, well, artificial world record. Yes, there were a lot of comedy fails, but we’re at the very beginning of humanoid robotics. Do we need a new Olympic category for when things improve?

I remember seeing a tweet where someone said “I want the robots to do the housework and the laundry, not create art and music”. At least it looks like the housework is covered.

Back to Meyesight.

To demonstrate why I think we’re almost at the point of not actually having to review code, I chose another learning exercise. I re-wrote Meyesight, going 100% all-in on AI. I thought, “how far can I push Claude here?” i.e.

I gave up coding on this project to see if it would work & what I’d feel like after the experience.

I purposely wanted to see if I would have to write any code, any tests, or fix any bugs.

I spent about 6 hours working with ChatGPT 5 and then Anthropic’s Claude, writing a really comprehensive Product Requirements Document to hand to GitHub Copilot in VS Code. I described the application in as much detail as possible. I asked ChatGPT 5 what was missing and whether it could think of any features I hadn’t described adequately for implementation by Copilot. I went round in circles for hours, answering ChatGPT 5’s & Claude’s questions, editing the document, and asking if it was ready for implementation, again, and again, and again.

Eventually, passing the ChatGPT document to Copilot (& iterating it further with Claude Sonnet), it said the document was good to go:

There you have it. “The PRD is excellent and ready for implementation.”

Point to note: Claude Sonnet was way better than ChatGPT at seeing the missing pieces of the PRD and producing an actually implementable document. ChatGPT left a lot of holes but thought it was complete.

So, I attended to the 4 minor recommendations and told Copilot to add TensorFlow, do the migrations and implement the specified components and utilities. The test was to see if I could stay 100% in the Copilot chat window and never touch the editor side of VS Code.

Spoiler: I did.

Not only did it implement the database schema using migrations for Supabase, it also wrote the entire Grad-CAM neural network in JavaScript, alongside the entire front end in React using Tailwind for the CSS & layouts. It designed the screen layouts and even added features I didn’t request, like little images on the lists:

I didn’t ask for any of those little icons. Or the thumbnail, or the list layout, or the Analysis status. The model really went over and above what I’d asked for, and completely nailed the layout. All the screenshots above were entirely Claude-generated layouts. More importantly, not a single compilation error or non-working page. Claude Sonnet 4 is damned impressive. If this had been any of the OpenAI models, I doubt it would have even compiled. It’s important to note that I think the only reason it did such a good job at implementation is that the initial PRD was so detailed it was practically a rewrite to start with.

There were a couple of minor issues on the calibration page, where the app measures your reaction time (hence the 266ms & 364ms labels above) so that it can semi-reliably offset the recorded scan positions: the dot disappears some time before you actually press the space bar, so your reaction time has to be factored out of the recording. But the issues were truly superficial, and I still never looked at the code. I described each problem in as much detail as possible and let it find and fix the code. I think it took 4 or 5 attempts to get it working how I wanted.
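The calibration maths itself is nothing fancy. Here’s a hypothetical sketch (the function name and the pixels-per-millisecond figure are mine, not the app’s) of how a measured reaction time gets subtracted out before mapping a key press back to a scan position:

```python
# The dot actually vanished your reaction time *before* you pressed the
# space bar, so shift the press time back before converting it to a
# position along the scan track.

def corrected_scan_position(press_time_ms, reaction_time_ms, px_per_ms=0.5):
    """Scan position (px along the track) where the dot really disappeared."""
    true_time = max(press_time_ms - reaction_time_ms, 0)  # clamp at scan start
    return true_time * px_per_ms

# With a measured 266ms reaction time, a press at 1000ms maps to the
# position the dot occupied at 734ms.
pos = corrected_scan_position(1000, 266)
```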

Do I need to look at the code? I don’t think so. It works 100%. It also did a fantastic job of creating comprehensive unit tests, validating that the code works at a fundamental level:

Would I trust it to write a Netflix-scale app? No, of course not. Small-scale commercial code? Yes. Test apps & POC work? Absolutely.

Turns out I’m not the only person that’s gone 100% hands-off, accepting what the LLM generates. Mirek Stanek of Papaya Global has just done the same thing, using AI to generate 100k lines of code for a production-level app. He also explains that there’s a real danger of companies not capitalising on this shift. Smart guy.

At the current rate of progress, give it a year or two before the larger-scale apps are 100% AI generated, especially since AlphaGo has already achieved what was on the AI-2027 roadmap for June 2028, almost 3 years ahead of schedule. Don’t read either of those if thinking about Skynet gives you anxiety. I’m serious. Don’t do it.

The TL;DR of the AlphaGo paper is that they’ve created an AI that’s better at writing AI than humans are. AI is no longer constrained by human thought, only by compute power and electricity. And we know how that can scale.

So, why is context the new black?

Back to the title of this article. It’s taken a while and we’ve laid a lot of groundwork, but we’ve got here, so thanks for making it this far!

CONTEXT IS EVERYTHING. ORGANISE IT LIKE YOUR LIFE – AND YOUR JOB – DEPENDS ON IT.

If I could put that in a larger font, I would. Spending an entire day on the Meyesight PRD is the only reason it was implemented without a hitch.

How did I feel?

Not as bad as I expected, though this may be the honeymoon-amazement-that-it-actually-worked period. I dislike web development, so having something else worry about CSS and layout is something I can totally get behind!

Was I bored? YES, very.

Did I have faith it would work? NO.

But then I’d only really experienced OpenAI’s models, or Qwen Coder models running locally, using code fragments & cut-n-paste – nothing this complex to measure against.

Everything the model does is only as good as the context you give it. If it makes an error, it’s because your context wasn’t good enough. Remember, this thing can now code better and faster than you and anyone you know; like I said, if you don’t think that’s the case yet, give it a few months. Meyesight has already proven this for me. But YMMV.

Realize that context management is now your CORE ENGINEERING SKILL. Not coding, not architecture, not testing, not design, not UX, not anything specific. The model will replace all of these given the correct context. I have receipts. Poor old Stack Overflow is living proof of this shift happening in real time – the number of questions asked is now at 2009 levels and still trending downwards. In his blog article on the decline of Stack Overflow, Eric Holscher points out the effect that ChatGPT has had on its decline since 2022. It’s a little ironic that he cites a continuing need for a canonical resource for reference information, but then as the creator of Read the Docs he has a vested interest in maintaining the status quo.

Sorry, Eric, I think you’re next.

Of course, there’s still the requirement that a developer / architect / designer / tech person works with the LLM, since they’ll have all the requisite knowledge about what they want the LLM to create. Context creation still needs a driver. Mercedes will still need a Lewis. Having a skilled driver is still important.

As a developer, the biggest blocker used to be your typing speed. Now the biggest blocker is your ability to give clear, concise, explicit requirements to the model. If anything, the job has become more cerebral, not less. Awesome-Copilot is a GitHub repository brimming with pre-built context documents for various languages, and chatmodes to configure the agents. It’s a good starting point to get a feel for the required context building.

So, CONTEXT IS EVERYTHING. I think I mentioned that in passing.

Beneficial or not?

Is the A.I. change beneficial to us developers? Yes, I think so. It’s multiplied my work capacity ten-fold. And that’s great, but here’s where the managers and companies need to listen up: it doesn’t mean features can be produced 10 times faster. For that, every part of the pipeline has to move at 10x the rate. And that’s just not feasible.

As a single developer working in isolation, yes, you can go 10x. Implementing Meyesight took ~5 months before; now that’s shrunk down to 1-2 days, with the caveat that I already had an intimate understanding of what was required, having done this 4 times before. If I was generating the PRD from scratch, I think a week would have been adequate. If others were involved, then everyone would have to move at the same pace, and as everyone knows, adding people to a team doesn’t necessarily increase velocity, because the lines of communication increase. Plus, the power dimension is always vertical, and a choke point. Well-orchestrated, flat, diverse, generalist teams will be the ones that benefit the most from A.I., because this is the only pipeline arrangement that can move in multiples.

Amish agents!

To truly get the whole 10x productivity, the Amish barn-raising is a good model to emulate. Check the Harrison Ford movie Witness, where the Amish raise a barn in a single day. It’s remarkable: no managers, no team leads, no leaders leading the leadership. No debate over the design, no discussion about who does what. Just perfect harmony until the job is done. This is what I experienced working with the A.I. agents re-creating Meyesight. Although I had the initial requirement, I asked it what was missing, what could be better, what should be left out, what screens we’d need. I deferred ownership and we worked together, in equal partnership, until the task was complete. And it worked perfectly. This was literally the definition of utopian teamwork. An Amish barn raising on my desktop. Me, totally embracing A.I.

It took a lot to cope with that realisation and defer to the agent, because most of the time it really does know better. Again, if you think it’s not there yet, give it a couple of months.

Job safety

Here’s a quote from Scott Hanselman in a fireside chat: “A.I. won’t make you redundant, corporate greed will.” I think he’s right, and there are many purists arguing that the democratization of expertise will commodify work, making outputs interchangeable and devoid of personal touch. For example, in a recent DevRev article, Preserving craft in the era of AI, they explain in detail how the industrial revolution replaced artisans with standardised production. Maybe artisanal code will come around to being a selling point, like hand-built cars or watches. “Coded by humans” might be as sought after as an Aston Martin or a Breitling Navitimer. I know at least 16% of developers would be down with that.

Personally, I’m very grateful to work with a small team of developers for a company that recognises that A.I. is a productivity boost for developers, not a replacement. As Scott said in the same podcast, “Humans are the force multiplier of A.I.” I think our company recognises that.

Long may that continue.

I hope you got some value out of this article, all comments welcome. Thanks for reading, any questions, please reach out.

PS. Bonus points if you recognise the equations in the title image 😉 NO FEEDING IT TO CGPT!

PPS. 0% of this article was written by AI. That’s why it’s full of bad grammar, spelling mistakes and missing punctuation. No A.I. is that bad. But I AM, and I’ll own it.

Now, a polite request, can you help me?

My wife wants to change careers to become a QA. She’s done a year of voluntary testing, mostly manual testing with some entry-level automation using Postman for API tests. She’s also used Azure DevOps Test Plans, Confluence and other tools relevant to the role.

She’s also currently working towards her ISTQB. And because I’m all in on A.I., I’ve taught her how to 10x her test generation using VS Code & the Playwright MCP. She’s A.I.- and automation-ready!

She’s happy to join at the very, very bottom, in an entry-level position at a junior level, or even as an apprentice if it gets her into the industry. Ideally remote or hybrid with a local commute. We realize just how scarce these roles are. She also has me behind her, mentoring her & keeping her ready for her new career path with all the latest advances!

This is her portfolio

Posted in Uncategorized | Leave a comment

Raygun deployments & Azure Dev Ops

Unfortunately, it appears that Raygun does not have a ready-built deployment integration for ADO. There are a few supported tools listed here: Supported Deployment Tools, but sadly, no love for ADO. What is this, like 2013?

Anyway, not to worry, Raygun helpfully provides a simple API that we can use to submit a deployment when a release pipeline runs in ADO.

Authorising Access to Raygun

The first step is to get an API key from Raygun. Click on your user in the top right and go to My Settings.

Scroll down to the External Access Token field and copy it for later.

Next, select your application from the dropdown in the top left, go to Application Settings -> General, and save the API key from the Application Settings box.

Creating the ADO PowerShell release step.

It’s as simple as going to your project’s Releases menu item and selecting the View Stage Tasks link in your selected stage.

Next, add a PowerShell task. Click the PLUS sign next to the Agent Job.

Choose the PowerShell task from the Add Tasks pane.

Next, set up the task with the PowerShell script to perform a REST POST to the Raygun API.

I’ve chosen some ADO pipeline variables relating to our deployment process, but there are many variables to choose from in the classic release and artefact variable list if these aren’t relevant to your deployments.

It’s worthwhile testing whether the posting works by running this task as a separate PowerShell test script to make sure you’ve got the API keys set up correctly.
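If PowerShell isn’t handy, a few lines of Python will do for that sanity check. A heads-up: the endpoint URL and payload field names below are my assumptions from memory, not taken from this pipeline – verify them against Raygun’s deployment API docs before relying on this, and substitute your real External Access Token and application API key:

```python
import json
import urllib.request

# Assumed Raygun deployments endpoint - check Raygun's docs for the real one.
RAYGUN_DEPLOYMENTS_URL = "https://app.raygun.com/deployments"

def build_deployment_request(api_key, auth_token, version, owner_name):
    """Build (but don't send) the deployment-registration POST request."""
    payload = {
        "apiKey": api_key,        # the application's API key
        "version": version,       # e.g. the ADO release/build number
        "ownerName": owner_name,  # who or what triggered the deployment
    }
    return urllib.request.Request(
        f"{RAYGUN_DEPLOYMENTS_URL}?authToken={auth_token}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# urllib.request.urlopen(req) would perform the POST; inspect req first.
req = build_deployment_request("app-key", "token", "1.0.42", "Release pipeline")
```

Once the keys check out here, wiring the same POST into the PowerShell task is straightforward.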


Azure lights and Brothel Mode™

A long, long time ago, I was watching a Scott Hanselman developer video and he had L.E.D lighting around his office ceiling.

I thought that looked really cool, so like a nerdy little fanboy off I went and bought two sets of GoVee RGBIC LED lighting strips and 25m of conduit to hold them.

I chose GoVee because their strips are wi-fi connected (in addition to Bluetooth), the iPhone app is great and GoVee also has a developer portal so you can issue API calls to control your lights! They also work with Alexa which is another bonus.

I spent about an hour screwing the conduit under the coving that goes around my office ceiling and stuck up the 20m roll of RGBIC LEDs.

I was a bit annoyed that the conduit didn’t soften the LEDs into a more solid light bar, given the conduit covers were frosted rather than transparent, but it’s an acceptable glow.

Next, I added the 5m roll under the edge of my desk, for that Fast n Furious neon look!

End result :

Build success!

I also took the step of signing up to the GoVee API so I could issue HTTP requests to control my lights.

Then, I created Postman tests to change the colours.

Adding these HTTP requests to my personal Azure DevOps build pipelines to change the colours on a build failure results in this:

Build failed!

On showing this to one of my work colleagues, Mark Robinson, he jokingly named it “Brothel Mode™”, which has stuck, making me rename all the YAML tasks in the ADO pipeline to “EnableBrothelMode” when a build fails!

GoVee API use & YAML Pipelines

The GoVee developer site is great, with good documentation, and their API is super easy to use:

  1. Sign up, get an API key
    They’ll want to know what you’re using it for, but integration with ADO seems acceptable!
  2. Add an authorisation header with a key value pair :
    key : Govee-API-Key
    Value: <Your api key guid from Govee>
  3. Create queries to control your lights!
    Send a GET request to

    http://developer-api.govee.com/v1/devices

    to get a list of all your devices, their capabilities and the device IDs required to issue commands.
  4. Send a PUT request to change the light state!
    Include the Authorisation header containing your API key and send a request to

    http://developer-api.govee.com/v1/devices/control

    including a JSON body with the command parameters. For example, to change the lights to red:
{
   "device": "XX:XX:XX:XX:XX:XX:XX:XX",
   "model": "H6159",
   "cmd": {
      "name": "color",
      "value": {
         "r": 255,
         "g": 0,
         "b": 0
      }
   }
}
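Before wiring the calls into a pipeline, it’s worth sanity-checking them. Here’s a rough Python sketch (standard library only) that assembles the PUT request from step 4 – the device ID, model and API key below are placeholders, not real values:

```python
import json
import urllib.request

# Same endpoint as the post above.
GOVEE_CONTROL_URL = "http://developer-api.govee.com/v1/devices/control"

def build_color_request(api_key, device, model, r, g, b):
    """Build the PUT request that sets a Govee strip to an RGB colour."""
    body = {
        "device": device,
        "model": model,
        "cmd": {"name": "color", "value": {"r": r, "g": g, "b": b}},
    }
    return urllib.request.Request(
        GOVEE_CONTROL_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Govee-API-Key": api_key, "Content-Type": "application/json"},
        method="PUT",
    )

# With a real key and device ID, sending it is a one-liner:
# urllib.request.urlopen(build_color_request("<key>", "XX:...", "H6159", 255, 0, 0))
```

Swap the r/g/b values to get the “build passed” green variant.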

Control with Azure DevOps

Controlling the lights from an ADO pipeline is also pretty easy once you’ve got the YAML tasks figured out – aka spending a few excruciating hours fiddling with spaces to get it all lined up correctly. Stupid YAML spacing nonsense…

- stage: SetBuildLightsToFailure        
  dependsOn: SetupYARNAndBuild
  condition: failed()
  jobs:
  - job: EnableBrothelMode
    steps:
    - task: restCallBuildTask@0
      displayName: Enable Brothel Mode.
      inputs:
        webserviceEndpoint: 'Govee Azure Lights'
        relativeUrl: 
        httpVerb: 'PUT'
        body: |
          {
              "device": "XX:XX:XX:XX:XX:XX:XX:XX",
              "model": "H6159",
              "cmd": {
                  "name": "color",
                  "value": {
                      "r": 255,
                      "g": 0,
                      "b": 0
                  }
              }
          }
        contentType: 'application/json'
        headers: '{"Govee-API-Key":"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}'
        allowInvalidSSLCertificate: false

The thing about your entire office glowing red, apart from random callers thinking it’s a room full of cheap hookers, is that it’s an incentive to fix the build really quickly. No one likes sitting in a flaming-hell-box-of-crushed-dreams-and-broken-code.

All in all, a great addition to brighten the office and when there’s no one in the house I turn up the amp, line up some Justin Bieber, and change the lights to disco mode.

#LIVINGMYBESTLIFE

Posted in Uncategorized

Hollister – quality clothes, shit I.T.

OK, seems like another company takes the line of “change your email to use our service”.

I purchased a nice hooded top in store and signed up to the newsletter at the PoS terminal with my hills.house email address. I half expected it to reject it as invalid, but it all went through just fine.

Fast forward a few weeks. I receive an email with some heavily discounted items, so I duly ordered £60’s worth of t-shirts and hoodies.

The final page was “login and see your accumulated points” or something similar.

I thought, well, I haven’t created an account other than on the PoS terminal, and that didn’t ask me for a password, so I’ll log in or change my password on the web.

Bzzzzzzt. No, no I won’t. Or rather I can’t, because the website’s email validation is just as broken as British Telecom’s.

I thought things were getting better; I’d only had a couple of these – Yorkshire Bank (fixed) and B.T. (won’t fix) – and now we can add Hollister to the shit list.

I contacted customer support, and they responded with the typical canned helpdesk response of “have you tried clearing your cookies or using a different browser?”

No, because I know that won’t solve your shitty email validation problem.

So I responded politely a second time, pointing them to my previous blog post – B.T.’s inability to fix email. Poor but unsurprising. – which has some info on what’s wrong and how to fix it.

Next I received a nice response from Linda at Hollister customer support:

“Unfortunately, like your bank, the solution is to use an email address with a common domain.”

Nope, not good enough, another case of “we can’t fix it, change your email”.

Top marks Hollister, top marks.

Posted in Uncategorized

FleetWave Lite, first product release!

For full release details and a free trial, go to the Fleetwave Landing Page: https://www2.chevinfleet.com/introducing-fleetwave-lite

How did I get here?

FleetWave Lite is the product I’ve been working on for the past 18 months, converting an ASP.Net monolithic application that had grown organically over the previous 25 years into a cloud-native SaaS application. I’ll say it’s been difficult.

We had no usage metrics, no idea how many concurrent users we’d have to support, and very little idea of data sizes, because the product’s sales strategy was essentially fire and forget.

We didn’t have access to any of these parameters because the product was installed on the client’s premises in all cases.

As the application’s architect, I was concerned about these issues. To be fair, not all clients were opaque, but the specifics were murky at best.

In order to reduce risk, I over-architected some Azure services, but they are auto-scaled, so the application should be able to handle increased traffic regardless.

Database connectivity, notifications, client onboarding, caching, session state, auth*, and other features had to be modified from the monolith version. All of this just doesn’t occur to the developer when it’s just one application with everything held in memory; yes, even the session state was InProc, that’s how old-school this codebase was.

All FleetWave code changes were made by three of Chevin’s developers while I architected the changes for the move to Azure. I did manage to complete some code for the migration in the form of a client onboarding tool written in React with a C# WebAPI for the interface with Azure, which I enjoyed, but it confirmed my belief that I will never, ever become a web developer!

The next issue was our outdated tooling, which also needed to be updated.

Hello, Azure Dev Ops

We needed to migrate not only a 25-year-old monolith but also our development tooling to a system capable of supporting our final target and modern development techniques.

Yes, our previous Jira and Bamboo tooling allowed CI and CD, but they didn’t really support our desired target state. Because we’re a Microsoft house, the decision was simple: Azure DevOps. So I began the migration of our entire toolchain to A.D.O. in conjunction with setting up the whole Azure subscription & everything else.

I’d say I’d bitten off more than I could chew, but everyone says I have a big mouth, so I just ploughed ahead with my master plan, ignoring the naysayers, confident that I could achieve the end goal in the time we’d been given.

It’s amazing how much better this is than Bamboo. The features and user-friendliness are excellent. The move from the Atlassian tool chain was a breeze for us, given that we’re almost entirely Microsoft and Windows-based.

Deadline met

After just over 13 months, we’ve achieved our goal of migrating a 25-year-old monolith written in ASP.NET Web Forms into a nice shiny new Azure SaaS application!

So well done Karl Gibson, Neil Robinson and Paul Muir, with front-end refactoring from Daniel Joseph, and Bailey Clarke helping out with the testing.

NextGen here we go!

FleetWave NextGen is the next project on the horizon: C# WebAPI 2.0 used in conjunction with D.A.P.R., all hosted in Azure Kubernetes Service with an attractive React Native front end, so that we can approach this product from a mobile-first perspective.

I’ll post more about this as it unfolds!

Posted in Chevin

B.T.’s inability to fix email. Poor but unsurprising.

So, first thing, a pet hate. The number of times I’m signing up for something and the email address validation doesn’t accept my perfectly valid address because of the .house top-level domain. And, we’re not talking small fry here. Everything from my bank (now fixed due to my persistence!), to B.T. – yes B-friggin’-T, the largest telecoms provider in the U.K.

Almost makes me want to never sign up with a proper email address and use 10 Minute Email forevermore.

It’s not like B.T. doesn’t have enough money to pay for decent developers to work on their shitty billing website. But no. Even after ringing them and complaining, the response I got was worse than their developers not being able to code a validation regex correctly.

“It’s your problem, change your email address.”

Helpful lady on the B.T. Broadband billing help line.

You’re kidding me, right? I pay for my personalized family domain name that I’ve had for nearly 8 years, and you want me to change it so I can use your website? There’s something wrong with this customer service picture.

The .house top-level domain has been around for ~9 years now.

Domains that don’t end in .COM, .ORG, .NET & .CO.UK have been around a long, long time; in fact, there are over 1,500 top-level domains, so that’s probably 1,496 TLDs your shitty validation won’t cope with.

And yet so many sites and mobile applications don’t accept perfectly valid domain names as part of the email address.

So let’s fix that. Here’s a simple bit of C# to download the list of TLD names from IANA – the Internet Assigned Numbers Authority – and then use a fairly well-tested regex to validate both the email address and the top-level domain. It’s literally 20 minutes of effort.

Before you go getting your knickers in a twist, the regex doesn’t cover 100% of all cases, but the ones it doesn’t cover really are super edge cases with odd character combinations that you probably can’t type on a real keyboard anyway.

I did come across an article by Haacked that gives a solution, but whilst it copes with most of what’s in the RFC defining how an email address should be formatted, it still doesn’t take into account valid TLDs. Even the mighty Haack can’t 100% fix this, so I feel pretty good about my attempt!

So, yes, your email may be formatted correctly, but if the TLD isn’t valid, it’s goin’ nowhere!

I did also find a JavaScript package called MailCheck. It does a load of clever sub-domain/TLD checking and some regex validation but, you guessed it, IT STILL DOESN’T VALIDATE THE TLD PROPERLY.

They still have a static list of ‘valid’ TLDs. Yes, they’ve provided methods to ‘customise’ the list, but since you can’t make this up as you go along, why not build in getting the list from the horse’s mouth? IANA regularly provides this text list of all the TLDs!

Matching the TLD against a regex that says it has to be alphanumeric might make it syntactically valid, but it’s still wrong, because, well, D-N-S.

To be fair, B.T. probably has decent developers but for whatever reason, this bit of crap code slipped through their water-tight pull request process.

So here’s a little snippet I knocked up to see just how difficult proper TLD validation would be. Hint. Not very.

string[] tldNames;

void Main()
{
	tldNames = FetchIANARootDb();

	ValidateEmail("[email protected]").Dump("[email protected]");
	ValidateEmail("[email protected]").Dump("[email protected]");
	ValidateEmail("[email protected]").Dump("[email protected]");
	ValidateEmail("[email protected]").Dump("[email protected]");
	ValidateEmail("[email protected]").Dump("[email protected]");
	ValidateEmail("[email protected]").Dump("[email protected]");
	ValidateEmail("invalid@[email protected]").Dump("invalid@[email protected]");
	ValidateEmail(@"Fred\ [email protected]").Dump(@"Fred\ [email protected]");
	ValidateEmail(@"""Fred\ Bloggs""@example.com").Dump(@"""Fred\ Bloggs""@example.com");
	ValidateEmail(@"rob@hÔtels.com").Dump(@"rob@hÔtels.com");
	ValidateEmail(@"rob@hÔtels.cÔm").Dump(@"rob@hÔtels.cÔm");
	ValidateEmail(@"[email protected]").Dump(@"[email protected]");
	ValidateEmail(@"rob@uk").Dump(@"rob@uk");

	// Phil Haack email tests.

	"Haacked email tests".Dump();
	ValidateEmail(@"Abc\@[email protected]").Dump(@"Abc\@[email protected]");
	ValidateEmail(@"Fred\ [email protected]").Dump(@"Fred\ [email protected]");
	ValidateEmail(@"Joe.\\[email protected]").Dump(@"Joe.\\[email protected]");
	ValidateEmail("\"Abc@def\"@example.com").Dump("\"Abc@def\"@example.com");
	ValidateEmail("\"Fred Bloggs\"@example.com").Dump("\"Fred Bloggs\"@example.com");
	ValidateEmail(@"customer/[email protected]").Dump(@"customer/[email protected]");
	ValidateEmail(@"[email protected]").Dump(@"[email protected]");
	ValidateEmail(@"!def!xyz%[email protected]").Dump(@"!def!xyz%[email protected]");
	ValidateEmail(@"[email protected]").Dump(@"[email protected]");


}


bool ValidateEmail(string email)
{
	string domainName = string.Empty;

	if (string.IsNullOrWhiteSpace(email))
		return false;
	try
	{
		// Normalize the domain
		email = Regex.Replace(email, @"(@)(.+)$", DomainMapper,
							  RegexOptions.None, TimeSpan.FromMilliseconds(200));

		// Examines the domain part of the email and normalizes it.
		string DomainMapper(Match match)
		{
			// IdnMapping class converts Unicode domain names see https://tools.ietf.org/html/rfc3492
			var idn = new IdnMapping();

			// Pull out and process domain name (throws ArgumentException on invalid)
			domainName = idn.GetAscii(match.Groups[2].Value);

			return match.Groups[1].Value + domainName;
		}
	}
	catch (RegexMatchTimeoutException)
	{
		return false;
	}
	catch (ArgumentException)
	{
		return false;
	}

	try
	{
		// valid email format regex 
		string stackOFPattern = @"^(?("")("".+?(?<!\\)""@)|(([0-9a-z]((\.(?!\.))|[-!#\$%&\\'\*\+/=\?\^`\{\}\|~\w])*)(?<=[0-9a-z])@))" +
			@"(?(\[)(\[(\d{1,3}\.){3}\d{1,3}\])|(([0-9a-z][-0-9a-z]*[0-9a-z]*\.)+[a-z0-9][\-a-z0-9]{0,22}[a-z0-9]))$";
			
		bool validFormat = Regex.IsMatch(email,
			stackOFPattern,
			RegexOptions.IgnoreCase, TimeSpan.FromMilliseconds(250));

//		string haakedPattern = @"^(?!\.)(""([^""\r\\]|\\[""\r\\])*""|"
//			+ @"([-a-z0-9!#$%&'*+/=?^_`{|}~]|(?<!\.)\.)*)(?<!\.)"
//			+ @"@[a-z0-9][\w\.-]*[a-z0-9]\.[a-z][a-z\.]*[a-z]$";
//
//		validFormat |= Regex.IsMatch(email,
//			haakedPattern,
//			RegexOptions.IgnoreCase, TimeSpan.FromMilliseconds(250));

		if (!validFormat)
			return false;
		//validate the top level domain from the IANA list.
		var tld = domainName.Split(new[] { '.' }).Last();
		return tldNames.Contains(tld.ToUpper());
	}
	catch (RegexMatchTimeoutException)
	{
		return false;
	}

}

string[] FetchIANARootDb()
{
	try
	{
		using (WebClient client = new WebClient()) // WebClient class inherits IDisposable
		{
			//client.DownloadFile("https://data.iana.org/TLD/tlds-alpha-by-domain.txt", @"C:\temp\localfile.html");
			// Or you can get the file content without saving it
			string text = client.DownloadString("https://data.iana.org/TLD/tlds-alpha-by-domain.txt");
			// Skip the "# Version ..." header line; strip any '\r' and drop the trailing blank entry.
			var lines = text.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries).Skip(1).ToArray();
			lines.Count().Dump("Domain name count");
			return lines;
		}
	}
	catch
	{
		return default;
	}
}

OK, crappy formatting aside (I may get round to prettying that up), cut-n-paste this into LINQPad and run it.

In the end, it’s doubtful that it’s worth validating at all. The true validation is whether all of the mail servers between you and the service handle the mail correctly and whether you receive a response for signing up. Perhaps a test email button is a better (and more reliable) option! I wonder if B.T.’s developers can manage that.
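A test-email flow is barely more code than the regex, too. Here’s a minimal Python sketch of the idea (standard library only; the addresses, domain and token scheme are all made up for illustration):

```python
import secrets
from email.message import EmailMessage

def build_verification_mail(recipient, base_url="https://example.com/verify"):
    """Compose a sign-up verification email carrying a one-time token.

    Actually delivering it (via smtplib) is what proves the address works:
    every mail server on the path gets a vote, which no regex can give you.
    """
    token = secrets.token_urlsafe(16)  # unguessable one-time token
    msg = EmailMessage()
    msg["To"] = recipient
    msg["From"] = "no-reply@example.com"
    msg["Subject"] = "Confirm your email address"
    msg.set_content(f"Click to confirm: {base_url}?token={token}")
    return msg, token

# The real test, given a mail relay:
# smtplib.SMTP("mail.example.com").send_message(msg)
```

Store the token against the pending account, and the address is confirmed the moment the link is clicked – no TLD list required.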

Posted in Programming-W-T-F

Small really is beautiful

I read a lot of stuff on Medium.com; there’s always plenty of articles to waste a full lunchtime on. Today’s lunch went on an article about building a self-contained game in C# in under 8K!

Yep, pretty impressive for a game, even though it is only a clone of the Snake game that made Nokia phones so popular. However, it’s impressive because it’s in C# and because of the lengths Michal Strehovský went to in removing bits of the executable, not because it’s 8K. I use C# daily and rarely pay any attention to the size of the executables I produce. So, much kudos and respect to the author for picking apart the rubbish that MS puts into those fat, lard-assed .NET executables!

A long time ago, when I was in the games industry doing arcade conversions to home computers, there used to be competitions for writing tiny games and demos: the game competitions were limited to ~2K and the demos to ~4K. This was pre-interweb days.

Because I was working in a games company, we wrote everything from scratch in 100% assembly language – no libraries or anything else to help us out. So for a bit of fun, we reduced the game size limit from 2K to 256 bytes just to see what we could come up with! Most of our games at the time were 48K (for the ZX Spectrum) to 512K for the Commodore Amiga / Atari ST, so still tiny by today’s application sizes, but cramming a game with all the rendering and logic into 256 bytes would be a Jedi-master-level challenge!

I wrote a Space Invaders clone and a Pacman clone. One of the other guys wrote a Scramble clone, but I no longer have the code or executable for that.

Here’s the file size on disk for Space Invaders. 3 bytes under budget!

And a screen capture of the actual game running. This is captured with DOSBox, since Windows 10 doesn’t run .COM files.

And here’s the file size of Pacman:

Looks like it’s 278 bytes 🙁 22 bytes over budget, but I’ll not lose sleep over that one.

The maze data is a bit pattern crammed into 16 bytes! These bits are then reflected horizontally and vertically to create the symmetrical maze; the code to render the maze is about 38 bytes.
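The reflection trick is easy to sketch in a few lines. This Python version uses a made-up 8-byte quadrant rather than the original 16 bytes, but the idea is identical: store one quadrant, mirror each row’s bits for the right half, then mirror the rows for the bottom half:

```python
# Hypothetical top-left quadrant: 8 rows of 8 cells, one byte per row,
# bit 7 = leftmost cell, 1 = wall. (Not the original game's data.)
QUADRANT = [0b11111111,
            0b10000000,
            0b10111110,
            0b10100010,
            0b10101010,
            0b10100010,
            0b10111110,
            0b10000000]

def mirror_bits(byte):
    """Reverse the 8 bits in a byte (horizontal reflection of one row)."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (byte & 1)
        byte >>= 1
    return out

def expand_maze(quadrant):
    """Reflect a quadrant horizontally then vertically into a full maze."""
    # Each full-width row is the quadrant row followed by its mirror image.
    top = [(row << 8) | mirror_bits(row) for row in quadrant]
    # The bottom half is the top half reflected vertically.
    return top + top[::-1]

maze = expand_maze(QUADRANT)
for row in maze:
    print(format(row, "016b").replace("1", "#").replace("0", " "))
```

In Python this is throwaway, but in assembly the same loop structure is what lets 16 bytes of data plus ~38 bytes of code draw the whole screen.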

The games are functional: you can clear all the invaders to win, or if the invaders reach the bottom of the screen it’s game over. In Pacman all the dots are edible, and the ghosts can catch you; the ghosts also follow the maze randomly. Neither game totals up the score – that was another 12 bytes I wasn’t willing to spend.

We were pretty pleased with ourselves, giving each other massive pats on the back, thinking we were such smart asses (come on, we were young!). Until some other ultra smart ass produced a 256-byte Defender clone in Mode 10 graphics (320×200 pixel mode) with parallax star fields, scores and aliens. We all wanted to drown ourselves in a pool of our own tears. Lesson learned. Pride swallowed.

There’s always a bigger fish.

Me. Swimming in a salty pool of my own tears. Back in 1990.

All this was at least 16 years ago (I think that’s when I copied the files to one of my previous PCs, not the date they were written), and now a simple HelloWorld app is measured in megabytes! Even though RAM is infinite and CPU cycles are free, I still can’t shake the habit of thinking about size and speed in every line of code I write, be that in C#, Go, or Elixir. Worrying about branches taken vs not taken and CPU pipeline stalls isn’t really a thing anymore, but it will always be printed on my brain somewhere in indelible ink. I’m still having nightmares about parallax starfields. Thinking about these things is still useful for IoT or embedded RTOS work, but those opportunities don’t seem to pay as well as making pretty web pages using languages created by 7-year-olds.

If you’re masochistic enough to want to try writing apps that are all about speed and size, check out Steve Gibson’s GRC. There’s still something elegant about writing entire applications in tens of kilobytes. I think it’s a lost art, and Steve really is one of the masters.

I don’t think there’s a better intro to this alternate universe than Steve’s Small Is Beautiful starter kit.

Have a go and let me know if you produce anything truly tiny. And by tiny I mean less than 8K!

Posted in Uncategorized

Reboot

Turns out this thing is on.

So here we are again, another attempt at posting.

This time, no pressure.

No promises to post once a week (although I might).

No promises of interesting code (although there may be).

No promises of keeping multiple blogs for different subjects (no, there really won’t be).

What I do promise, however, is to attempt to make each post: mildly interesting, possibly useful, mostly in grammatically correct English, and to bring you wealth, love, hot chicks and world peace*.

So what’s new?

Well, I’ve given up contracting, maybe temporarily – who can refuse doubling your income overnight – but for now it’s something I must do. I’d just had enough of the same DULL projects, DULL web coding (anyone who knows me knows how much I *love* web work), DULL database work, and, well, just all-round DULL crap. There’s only so many dull form-filling-validating-database-writing-report-generating pieces of crap you can stomach before you want to decapitate yourself in a fit of depression. I’d reached that point a few months ago. I never thought I’d reach my fill of computers, having been a code monkey since the age of about 12… but it happened around about May ‘14. I was ready to give up tech and go and make furniture or something.

So, I took a permanent position doing quite possibly the most interesting code-work outside of the games industry. It’s re-ignited the flame that gave me my love of all things tech, and better still, I’m in at geek-ground-zero. I’m in the R&D department of what can only be described as James Bond’s Q division.

It’s still a bit of a trek – an hour’s drive – but it’s well worth it.

I *LOVE* my job.

*no, not really. Although there may be cheap hookers and coke.

Posted in General

Feeling a little neglected…

Is this thing on? :/

Posted in General

Fresh starts…

Yup, but this time it really is. Lots of personal stuff happened over the last 18 months that brought progress on pretty much everything to an absolute halt!

Firstly, personal health: not being able to see properly or walk was a bit of a show-stopper. But thankfully I’m completely recovered. Whew, that was close.

Secondly, I’m now (almost) divorced! Just waiting for the decree nisi and we’re all done. I still have the house and the kids, but since they’re not really ‘kids’ anymore, I’m on my own at home most of the time.

Thirdly, a near-death experience in my car. I aquaplaned on all four wheels in torrential rain, so even four-wheel drive and traction control couldn’t save me! I spun at least 5 times, took out a speed sign, a large blue motorway sign and a few trees, and slid to a halt in the mud. Luckily I bounced around on the hard shoulder’s grass verge and didn’t end up back on the carriageway. Contrary to popular belief, I wasn’t driving fast! About the ONLY time I was trundling along to get home, and it all goes pear-shaped! Still, the airbags and the car were amazing; if I’d been in a lesser car I probably wouldn’t have walked away from it. Even the police officer that attended couldn’t believe I walked out! Awesome car. Unfortunately, no more.

I don’t think that’s gonna buff out.

So after the insurance paid out I bought another new car, hopefully this will last longer. It’s a lot slower, but a MUCH better car!

Another advantage is petrol consumption. The Evo managed about 18mpg on average, 25 if I was lucky, and 4 if I was having fun. The cost of commuting was £70 of fuel every two days, or about £1,500 a month in fuel alone. The new Mercedes manages a good 45mpg taking it easy, and barely drops below 30mpg even when pressing on. So the car might be more expensive, but it will have paid for itself in 3 years. How about that for justification? 🙂

Posted in Featured, Personal