MacBook Pro Keyboard (2018) Review – Very Good

I was shopping around for a new laptop suitable for software development (Docker, VS Code, C#, Node.js, local MySQL and other databases, etc.) and narrowed it down to the X1 Carbon and the MacBook Pro 15″. I needed something beefy enough to run Docker, yet light enough to carry to work every day. Display, battery life, and design were my top considerations.

The MacBook hit all the notes: a great display with good PPI and aspect ratio, light weight, good battery life, and great design. The only thing that made me hesitate was the negative reviews on the internet criticizing the butterfly keyboard found on the new MacBooks. My previous MacBook Pro 17″ (from 2010) is still kicking strong, and I never had an issue with its keyboard.

On my workstations at home I use a Das Keyboard with Cherry MX Brown switches, and I also have an IBM Model M keyboard, so I’m used to a higher actuation force.

I was pleasantly surprised at how good the butterfly keyboard feels. It’s similar to a mechanical keyboard: it’s clicky and the key engages without having to travel far. My old MacBook’s keyboard now feels mushy by comparison (so does the keyboard on the X1 Carbon).

If you are shopping for a laptop and prefer mechanical keyboards, you can ignore the negative reviews of the MacBook’s keyboard. As far as the “sticky key” problem is concerned, if it ever happens I’ll send the laptop to Apple for a fix. It does not worry me. Just remember to keep the keyboard clean.

Idea: Introduce and train AI at drive-throughs

There are no face-to-face interactions at a drive-through. No hand gestures. No facial expressions. Only the microphone and the speaker.

Why not introduce AI and feed the audio and the order data into the engine? Once an order has been picked up, the AI would learn what was ultimately ordered and whether any mistakes were made (wrong item given, misunderstanding about what was ordered, etc.). Often the cashier asks certain things twice because they did not catch the response the first time, or because the customer does not speak the best English. The AI would learn all of that too.

After some time (months? years?) the AI would be allowed to take *some* orders, say 2% of them, and its success could be assessed. If the AI is doing well, give it more orders. If not, keep training it.
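Purely as an illustration of those rollout mechanics (every name and number below is made up), the routing and ramp-up could look something like this:

using System;

public class OrderRouter
{
    private readonly Random _random = new Random();
    private double _aiShare;   // fraction of orders handled by the AI, e.g. 0.02 = 2%

    public OrderRouter(double initialAiShare) => _aiShare = initialAiShare;

    // Decide, per incoming car, whether the AI or the human cashier takes the order.
    public bool RouteToAi() => _random.NextDouble() < _aiShare;

    // After each review period, widen the AI's share if its order accuracy matches
    // the cashiers', otherwise shrink it back toward 2% and keep training.
    public void AdjustShare(double aiAccuracy, double humanAccuracy)
    {
        _aiShare = aiAccuracy >= humanAccuracy
            ? Math.Min(1.0, _aiShare * 2)
            : Math.Max(0.02, _aiShare / 2);
    }
}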

Once the AI is doing well and is capable of handling *most* orders, just have the cashier as a backup who can step in for a very complicated order.

How much money would this save? One person. Usually drive-throughs have one person taking the orders with the headset and other people handing out the food, taking the payments, etc. So not a huge saving on its own. But if the AI does well, it could also be introduced at the front counter, enhancing or replacing those order screens, which do take longer than talking to a person. This is where the real benefit comes in. The only thing faster than talking to a person is having a preset order in your fast food app, but I often change things up, so presets don’t work for me. Having multiple presets confuses me.

The main goal here is efficiency: allowing a location to move more product. Move the front counter personnel and the drive-through order takers into the back (to cook food, etc.) and allow for faster intake of orders. Then the chain can move more product with the same number of staff.

Idea: Real-time restaurant discounts via Google Maps

Let’s say it’s 2pm on a weekday. Not a busy time for restaurants or lesser-known fast food joints. The restaurant must be staffed nonetheless: cooks, waiters, etc. have to be paid, and there are other running costs as well.

At around that time, I open Google Maps and type in “food”. The app renders a list of food joints in the area. Wouldn’t it be cool if, in real time, Google showed a little tag next to a restaurant’s name displaying a discount (valid for the next 30 minutes, for example)?

The discount could be manually set by the restaurant owner via some sort of admin interface. The owner could enter the days, times of day, etc. and the discount amount, or set up a “flash sale”. The owner would have to know how much she is willing to discount, and at what dates/times, based on the restaurant’s sales data.

Alternatively, the discount could be calculated automatically by analyzing the sales data in real time. An AI/analytics engine could look at historic records as well as the sales targets for the month to see whether any discounts should be given. If it makes sense to discount and bring in more customers at a lower margin so that the monthly sales target is reached faster, then an appropriate discount would be set automatically.
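To make that a bit more concrete, here is a purely hypothetical sketch of such a rule; every name, input, and threshold below is invented for illustration.

using System;

public static class DiscountEngine
{
    // Suggest a discount percentage for the current time slot, based on how far
    // the restaurant is behind its monthly sales target and how slow the current
    // hour is compared to the historical baseline for this day/hour.
    public static decimal SuggestDiscountPercent(
        decimal monthToDateSales,
        decimal monthlyTarget,
        double fractionOfMonthElapsed,
        decimal typicalSalesThisHour,
        decimal currentSalesThisHour)
    {
        var expectedByNow = monthlyTarget * (decimal)fractionOfMonthElapsed;
        var behindPace = expectedByNow - monthToDateSales;
        var slowHour = currentSalesThisHour < typicalSalesThisHour * 0.5m;

        // On target, or the slot is busy anyway: no discount needed.
        if (behindPace <= 0 || !slowHour)
            return 0m;

        // Scale the discount with how far behind pace we are, capped at 30%.
        var severity = Math.Min(1m, behindPace / expectedByNow);
        return Math.Round(severity * 30m, 0);
    }
}

Under this toy rule, a restaurant that is 20% behind its expected pace during a dead hour would get roughly a 6% discount tag.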

Google Maps could charge some sort of flat monthly fee for this feature, which could be used via the admin UI or via an API.

Should you run ASP.NET Core 2.2 Serverless in AWS?

AWS will only support LTS releases of .NET Core. Version 2.2 will never be an LTS release; the next LTS version will be 3.1, as per the .NET Core roadmap.

Although you *could* leverage Amazon.Lambda.RuntimeSupport, I would advise against it. In my experience, both the deployment and the startup times are longer.

When rolling with a custom runtime, you are responsible for ensuring you are running on the latest set of patches. Running in such an environment may open up a discussion with your InfoSec team. They may ask why you deviated from the standard AWS offering (which includes patching, etc.) and what exactly the custom environment is comprised of. If you have a good reason (e.g. a new feature you absolutely need), then the decision is justified. Otherwise, I would steer clear of unnecessary customizations, which incur engineering and operationalization costs.

I found these benchmarks by Zac Charles comparing custom and standard .NET Lambda environments. Cold start, warm start, and deployment times are all longer for custom environments.

If you are still not convinced and would like to roll your own environment, I suggest reading Announcing Amazon.Lambda.RuntimeSupport on the AWS blog and then looking at the CustomRuntimeAspNetCore code sample.
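For reference, here is a rough sketch of what that involves, closely following the shape of the AWS sample (the handler below is a made-up example): with a custom runtime there is no managed host, so your own Main method wraps the handler and runs the Lambda event loop.

using System;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.Json;

public class Program
{
    // Made-up handler for illustration: upper-cases the incoming string.
    public static string Handler(string input, ILambdaContext context)
        => input?.ToUpperInvariant();

    private static async Task Main(string[] args)
    {
        // Wrap the handler and let LambdaBootstrap run the polling loop that the
        // managed .NET Core 2.1 runtime would otherwise run for you.
        Func<string, ILambdaContext, string> handler = Handler;
        using (var wrapper = HandlerWrapper.GetHandlerWrapper(handler, new JsonSerializer()))
        using (var bootstrap = new LambdaBootstrap(wrapper))
        {
            await bootstrap.RunAsync();
        }
    }
}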

Private and public ES6 class fields are not yet supported outside of Chrome

Created a web app. Used ES6 classes, including the use of private and public fields. Tested the app — all good.

Started to do multi-browser testing and got errors. Firefox was good enough to explicitly mention that class fields are not supported.

I find that if you declare the fields inside the constructor, IntelliSense will still pick them up. Having up-front field declarations, however, does make a class more readable and similar-looking to C++ / Java / C#.

Field Declaration info on MDN

Tried with Firefox 67.0.2, Edge 44.17763.1.0, and Chrome 75.0.3770.80

Shad SH34 Carbon On KTM Duke 690

Bought the case for 120 CAD with shipping from Kimplex Canada. The backrest was an additional 36 CAD. This is a great price for a case, especially one that’s made in Spain.

The Duke 690 is not a big bike, so I was looking for something that would not overpower it. Style-wise, the case matches the looks and the lines of the bike. The case is very lightweight, and the mounting bracket does not look too large or unsightly with the case off. I can fit one full-size helmet plus gloves and a few other items.

Highly recommended!

Intel UHD Graphics 630 Works for Netflix 4k!

I’ve been struggling to find information on what’s needed to stream Netflix movies in 4K. My HTPC used to have an Nvidia GeForce GT 1030 card, and it was NOT possible to play 4K movies because its video RAM was less than the required 3GB. Otherwise the card had it all: an HDMI 2.0 port, HDCP 2.2 support, and hardware HEVC decode capability.

I was pleasantly surprised that an Intel Core i3-8100 CPU with built-in UHD 630 iGPU can do Netflix 4k without a hitch.

The only caveat is that you will be limited to a 30Hz refresh rate (due to the HDMI 1.4 limitation) and HDR is not possible. Since HDR/SDR handling in Windows 10 is not all that great (yet), I will wait and use the iGPU for now.

For reference, I have

  • ASRock B360M Pro4 motherboard
  • Intel Core i3-8100
  • HEVC Video Extensions from Device Manufacturer from Microsoft’s store
  • Latest Intel ME drivers from ASRock
  • In the BIOS, make sure “Software Guard Extensions” and “Intel(R) Platform Trust Technology” are Enabled 

Also, with this setup I was able to get the furthest so far with the Ultra HD Blu-ray Advisor from PowerDVD. The only thing missing is HDR; otherwise all other requirements have been met!

Logging to Splunk from AWS Lambda via NLog

First, it’s important to keep your Lambdas lean and mean, as warm-up times can be significant (in seconds!). Do not load unnecessary libraries like ASP.NET Core in your Lambda functions.

I’ll be showing how to use NLogTarget.Splunk package from NuGet (link).

Just like for any other type of app, you need an NLog.config file (see example here)

I strongly recommend you enable async logging in NLog.config via

<targets async="true">

otherwise logging will be done synchronously, slowing down the execution time of your function.

To load NLog.config, do the following in the beginning of your function:

var logFactory = LogManager.LoadConfiguration("NLog.config");
var logger = logFactory.GetCurrentClassLogger();
// continue on using logger. LogManager.GetCurrentClassLogger() also works

NLog.config must be in your Lambda package, so ensure it’s copied there upon build (set it as “Content/Copy If Newer” in Visual Studio)

Prior to completing your Lambda’s execution, ensure you add

LogManager.Flush(new TimeSpan(0, 0, 3)); // flush any remaining messages. Max 3 seconds

This allows NLog to “catch up” and write any outstanding log entries. If you have a global exception handler in your Lambda function, make sure to add the same line there as well; if you do not, you may end up with a Lambda that fails and sends out no logs.
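Putting it all together, here is a minimal sketch of a handler that loads NLog.config, logs, and flushes in a finally block so that both the success and the failure path ship their log entries. The function name, payload type, and messages are made up for illustration.

using System;
using Amazon.Lambda.Core;
using NLog;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

public class Function
{
    // Load NLog.config once per container, not on every invocation.
    private static readonly Logger Logger =
        LogManager.LoadConfiguration("NLog.config").GetCurrentClassLogger();

    public string FunctionHandler(string input, ILambdaContext context)
    {
        try
        {
            Logger.Info("Processing input: {0}", input);
            // ... your business logic here ...
            return "done";
        }
        catch (Exception ex)
        {
            // Acts as the global exception handler mentioned above.
            Logger.Error(ex, "Unhandled exception in Lambda handler");
            throw;
        }
        finally
        {
            // Give async NLog targets up to 3 seconds to ship queued entries
            // before the execution environment is frozen.
            LogManager.Flush(new TimeSpan(0, 0, 3));
        }
    }
}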

That’s it. To learn more about using NLogTarget.Splunk, see documentation here