I finally got my beets powder for the upcoming hiking trip! My plan is to get a “serving” (see the BeetElite box) every day for the 2 weeks preceding the trip.
So, what do beets have to do with hiking, you ask? Well, I'm hiking at altitude (Mt. Massive, 14,421 ft elevation) and nitrates are powerful vasodilators, which will help shuttle more oxygen throughout the body. There's more to it though: supplementing with nitrate helps generate energy while using less oxygen. How much less? Somewhere around 19%, according to this article from the US National Library of Medicine (NLM): Dietary Nitrate Supplementation and Exercise Performance
It says that “NO3 supplementation is emerging as a promising nutritional aid, with potentially beneficial applications for the wide variety of individuals ascending to altitude each year.”
It does have its limitations, though. In this other article from Scientific American, the researcher concludes by saying: “If I had a bottle of beets around I would take it for sure. But that won’t bring you to Mount Everest just by drinking beetroot.”
So, it’s not all Voodoo mumbo jumbo, it’s an actual biohacking trick!
This is a list of things that I’ve learned over the years. Nothing is really essential (except for soles with good grip). I follow these ideas to make my trip a bit nicer but, in the end, please do what you want and enjoy the scenery. We will find a way to have a blast!
Shoes
Hiking boots vs. trail runners: trail runners are lighter and hug the rocks better (they’re supple). You get a better feel for the terrain, but it’s harder on the feet. Hiking boots offer better protection for your ankles (and all-around protection).
I used to bring both because I have them, but now I only use my trail runners.
Anyhow, a sole with good grip is the only important thing.
Special note: Make sure that you have enough room for your toes. Your feet will swell, and going downhill for 4-5 hours straight may cause painful blue toenails.
Socks
2 pairs recommended: wear one and keep one in the backpack for a quick change.
Avoid cotton. Merino wool blend is a good choice (low friction, wicking, quick dry and no smell).
Darn Tough is a good brand but a bit expensive ($25).
I normally just get some unknown brand from Academy for cheaper.
Pants
Should protect from the cold but be loose enough for movement.
Special note: There is some rock scrambling on top of the 14ers (hands-and-feet sections).
Underwear
Avoid cotton. Consider synthetic sport boxer briefs: moisture wicking, protects the inner thighs…
Core/layering
3-layer system: a synthetic base layer to wick away sweat; a middle layer (puffy, with air pockets to insulate); and finally a windbreaker (with hood, rain-proof) to keep in the middle layer’s warmth.
Gloves
Need a good pair for the 14ers.
Head
Beanie for the 14ers. Baseball cap, rag.
Hiking poles
Optional; they make coming down easier on the legs (about 10-20% relief).
Head lamp/flashlights
Need it for early morning start (14ers) and when running out of time in the evening, although near full moon expected this year.
Food
Need a boost for 14ers (carbs/salt): Trail mix with some salted nuts/sweet chocolate/dry fruits.
If we are not getting an early start (i.e. not doing a 14er), we stop by the coffee shop and get some pastries to go.
Water
Need a 2L container at least. May need an extra liter in a side container for the 14ers.
Backpack
Big enough to fit beanie, gloves, mid layer, outer layer, food, and water. I have a 22-liter pack and I’m struggling a bit.
Other things (optional):
Sun tan lotion
Sunglasses
Tylenol
Athletic tape/gauze
Salt/electrolyte
Pocket knife
String/rope
Tissues (my nose goes wild! also, can serve as TP)
One of the main issues when creating and releasing Docker images is making sure that we do not reveal any secret information (passwords, etc.).
In Node, we often use a .env file to achieve this goal. Sadly, we would need to include this file in our Docker image, which would make it insecure. Indeed, if such a file is added to the image, anybody who gets the image would be able to see its content.
So instead, we will want to pass our secret information when we are running the image. There are 2 ways to pass environment variables to Docker:
Using the option -e
docker run [...] -e my_connection_string="xxxxx" -e my_password="xxx" my_node_container
Using the --env-file Docker option
For this method, you’ll need to create a file containing the list of KEY=Value pairs. Example: my_env.list
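As a sketch, reusing the variable names from the -e example above (my_env.list is just an example file name): the file holds one KEY=value pair per line, and you point Docker at it with the --env-file flag.

# my_env.list
my_connection_string=xxxxx
my_password=xxx

docker run [...] --env-file ./my_env.list my_node_container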
From there, you will be able to access these variables from Node by using process.env.{KEY}.
Please note that, as a general rule, you should always follow the motto “batteries included but removable”, meaning that you should code your Node application with default values (when possible) so the software will run even without these environment variables, “straight out of the box”.
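A minimal sketch of that idea in Node, reusing the variable names from the docker run example above (the fallback values are made-up, development-only defaults):

// config.js (hypothetical helper module)
const connectionString = process.env.my_connection_string || 'mongodb://localhost/dev';
const password = process.env.my_password || 'dev-only-password';

module.exports = { connectionString, password };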
SSL certificates make it possible to encrypt the traffic between the server and the browser. Until recently, I did not have much of a need for this since my work was done on the Intranet. I did have many websites and tools that I was self-hosting on AWS, but I really never had to add any sort of protection.
I always knew that it would be a “must-have” if I ever wanted to open my applications to other users or even put more personal content on the web.
Anyhow, now that I had some time to play around with it, I am shocked to see how easy it was to get started with SSL… and it was FREE!
To tell you the truth, I’m mostly writing this blog as a reminder on how to do it since it was all done in less than 30 minutes!
Certificate Authority (CA)
Only a CA can issue the certificate that you will need to enable HTTPS. For my websites, I went with Let’s Encrypt. You won’t need to read much more on their website, though, as they have a client called Certbot that will do all the work for us!
If you go to the Certbot website, you’ll be able to enter the software (webserver) and the system (OS) that you are using. In my case, I’m using Apache on Ubuntu 18.04. Once you select your software/system, you’ll be provided instructions on how to do it.
I’ll repeat some of the instructions here, just so I can remember the steps that I took, but I encourage you to go directly to their website. They did an excellent job at detailing every step and it went without a hitch.
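For reference, once Certbot is installed (the install commands for your OS are on their site), the Apache plugin does essentially all the work with a single command; this is just a sketch, not their full instructions:

sudo certbot --apache

It will ask for your email and domain(s) and update the Apache configuration for you.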
I did have some issues afterward but it was related to the port 443 used by SSL.
I’ll explain the steps that I took to solve this issue later on.
Voila! It should now be working for you… well.. almost.
Opening the port 443 on AWS
I needed to open the port 443 in the AWS Lightsail console for my instance. To do so, click on the vertical-three-dots icon of your instance and click Manage. From there, go to the Networking tab. We will change the firewall to make sure that we can communicate with port 443 of our instance.
Click “+ Add another”
Select HTTPS from the dropdown.
Click Save
I also had the firewall enabled within my instance, so I had to open that port also:
To see if the firewall (ufw) is active:
sudo ufw status
To open the https port:
sudo ufw allow https
Limited time only!
Sadly, this certificate will expire every 3 months, but Certbot makes it quite easy to renew. Here’s an example renew command that you can run right now (it’s a dry run only; it won’t actually renew the certificate).
sudo certbot renew --dry-run
If you entered your email correctly while setting it up, you should receive a notification when it’s about time to renew. You will then need to re-enter this command without the --dry-run flag.
The SOLID principles are a set of software design principles that teach us how we can structure our functions and classes in order to be as robust, maintainable and flexible as possible.
S - Single-responsibility principle
A class should have one and only one reason to change, meaning that a class should have only one job.
O - Open-closed Principle
Objects or entities should be open for extension but closed for modification.
L - Liskov substitution principle
Ability to replace any instance of a parent class with an instance of one of its child classes without negative side effects.
I - Interface segregation principle
A client should never be forced to implement an interface that it doesn’t use or clients shouldn’t be forced to depend on methods they do not use.
D - Dependency Inversion Principle
Entities must depend on abstractions, not on concretions. It states that high-level modules must not depend on low-level modules; both should depend on abstractions.
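As a quick JavaScript sketch of that last principle (the class and method names are invented for the example): the high-level service depends on any object with a save method, not on a concrete storage class.

// Hypothetical example of dependency inversion
class FileStorage {
  save(name, data) { /* write to disk */ }
}

class S3Storage {
  save(name, data) { /* upload to S3 */ }
}

class ReportService {
  constructor(storage) {
    this.storage = storage; // any object exposing save(name, data)
  }
  publish(report) {
    this.storage.save(report.name, report.content);
  }
}

// Either concrete implementation can be injected without touching ReportService.
const service = new ReportService(new FileStorage());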
Behavioral
Most of these design patterns are specifically concerned with communication between objects.
Chain of responsibility : Chain of responsibility delegates commands to a chain of processing objects.
Command : Command creates objects which encapsulate actions and parameters.
Interpreter : Interpreter implements a specialized language.
Iterator : Iterator accesses the elements of an object sequentially without exposing its underlying representation.
Mediator : Mediator allows loose coupling between classes by being the only class that has detailed knowledge of their methods.
Memento : Memento provides the ability to restore an object to its previous state (undo).
Observer : Observer is a publish/subscribe pattern which allows a number of observer objects to see an event.
State : State allows an object to alter its behavior when its internal state changes.
Strategy : Strategy allows one of a family of algorithms to be selected on-the-fly at runtime.
Template method : Template method defines the skeleton of an algorithm as an abstract class, allowing its subclasses to provide concrete behavior.
Visitor : Visitor separates an algorithm from an object structure by moving the hierarchy of methods into one object.
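As a small JavaScript sketch of the Observer pattern from the list above (the names are invented for the example):

// Minimal publish/subscribe: observers register a callback and get notified on emit.
class Subject {
  constructor() { this.listeners = []; }
  subscribe(fn) { this.listeners.push(fn); }
  emit(event) { this.listeners.forEach(fn => fn(event)); }
}

const subject = new Subject();
subject.subscribe(event => console.log('observer 1 saw:', event));
subject.subscribe(event => console.log('observer 2 saw:', event));
subject.emit('something happened'); // both observers are notified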
Creational
Creational patterns are ones that create objects, rather than having to instantiate objects directly. This gives the program more flexibility in deciding which objects need to be created for a given case.
Abstract factory : Abstract factory groups object factories that have a common theme.
Builder : Builder constructs complex objects by separating construction and representation.
Factory method : Factory method creates objects without specifying the exact class to create.
Prototype : Prototype creates objects by cloning an existing object.
Singleton : Singleton restricts object creation for a class to only one instance.
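And a small JavaScript sketch of the Singleton pattern (names invented for the example):

// getLogger() always returns the same Logger instance.
class Logger {
  constructor() { this.lines = []; }
  log(msg) { this.lines.push(msg); }
}

let instance = null;
function getLogger() {
  if (!instance) instance = new Logger();
  return instance;
}

console.log(getLogger() === getLogger()); // true: only one instance ever exists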
Structural
These concern class and object composition. They use inheritance to compose interfaces and define ways to compose objects to obtain new functionality.
Adapter : Adapter allows classes with incompatible interfaces to work together by wrapping its own interface around that of an already existing class.
Bridge : Bridge decouples an abstraction from its implementation so that the two can vary independently.
Composite : Composite composes zero-or-more similar objects so that they can be manipulated as one object.
Decorator : Decorator dynamically adds/overrides behaviour in an existing method of an object.
Facade : Facade provides a simplified interface to a large body of code.
Flyweight : Flyweight reduces the cost of creating and manipulating a large number of similar objects.
Proxy : Proxy provides a placeholder for another object to control access, reduce cost, and reduce complexity.
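And one last JavaScript sketch, this time of the Adapter pattern (the payment classes are invented for the example):

// Our code expects pay(amountInDollars); the legacy client only has makePayment(amountInCents).
class LegacyPaymentClient {
  makePayment(amountInCents) { console.log(`paid ${amountInCents} cents`); }
}

class PaymentAdapter {
  constructor(legacyClient) { this.legacyClient = legacyClient; }
  pay(amountInDollars) { this.legacyClient.makePayment(amountInDollars * 100); }
}

const adapter = new PaymentAdapter(new LegacyPaymentClient());
adapter.pay(12.5); // paid 1250 cents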
In this example, the configuration tells Apache to forward any requests to app1 straight to our app running on port 3000, while app2 requests will be served by the app running on port 5000, as sketched below.
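A sketch of what such an Apache configuration could look like (the /app1 and /app2 paths and the ports follow the description above; mod_proxy and mod_proxy_http need to be enabled):

<VirtualHost *:80>
    ProxyPreserveHost On

    # /app1 requests go to the Node app listening on port 3000
    ProxyPass        /app1 http://localhost:3000/
    ProxyPassReverse /app1 http://localhost:3000/

    # /app2 requests go to the app listening on port 5000
    ProxyPass        /app2 http://localhost:5000/
    ProxyPassReverse /app2 http://localhost:5000/
</VirtualHost>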
On your Home page, click on “create instance”. For this post, I’ve selected OS only and Ubuntu 18.
Once you’ve created a VPS instance, click on the 3 dots icon and select Manage. In the Networking tab, click on “create static IP”. This way you’ll be able to stop/start the instance without losing the IP that was assigned to it.
Downloading the SSH key
The next step is to download our SSH key so we can SSH into our server. So, go to your account’s page (toolbar on top) and, on the SSH Keys tab, download the key.
Put the key in your home directory under the .ssh subdirectory (by convention).
Example:
c:/users/denis/.ssh/myKey.pem
To connect with an ssh client (I use GIT bash), enter the following line:
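Something along these lines, with the key file from above and your instance’s static IP (both placeholders here):

ssh -i ~/.ssh/myKey.pem ubuntu@<your-static-ip>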
After that, edit and change “ubuntu” to “technomuch” (I use the nano text editor)
sudo nano /etc/sudoers.d/technomuch
Create an SSH key to use for our new user
On the client side (local), create a new SSH key from ssh-keygen.
NOTE: on Windows, I used the GIT Bash to create the SSH key pair.
ssh-keygen
Then enter a name for the key that you will create. After that, you’ll be asked to set a passphrase to protect the key.
Two files will be created: a file without an extension and a “.pub” file. The “.pub” file is the public key that we will eventually copy to our server; the other is our private key, which you should never share with anyone.
SSH set up
We will now create the SSH authorized keys file, set its permissions and finally, change its ownership.
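A sketch of those three steps, assuming the new user is called technomuch as above (adjust the paths and user name to yours):

sudo mkdir -p /home/technomuch/.ssh
sudo touch /home/technomuch/.ssh/authorized_keys
sudo chmod 700 /home/technomuch/.ssh
sudo chmod 600 /home/technomuch/.ssh/authorized_keys
sudo chown -R technomuch:technomuch /home/technomuch/.ssh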
Now that we have the file, you’ll need to copy/paste the content of the xxxx.pub file that we’ve just created using the ssh-keygen software.
sudo nano /home/technomuch/.ssh/authorized_keys
Use the console to reboot
PS: Public key file format
The values in the xxx.pub file should be on one line and look like this:
ssh-rsa AAAAB3<...very long string...>Tx5I55KMQ== rsa-key-20200820
But, if you have generated the key using another tool like PuTTYgen, you may get a key looking like this:
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20200820"
AAAAB3NzaC1yc2EAAAABJQAAAQEAki9hkBcpDBoS+7B/GdaLMP+Clu4ywfZgZi80
... more lines ...
+Qy3XKjwPD9AtNOD+vIayR5/T4OSF1ooEzcMarcS8xu3gTEoykH55f8IFZU0TyHU
EEQsiSsbNeV7uW44YAUmX+AWM+IODGF2YirISHGe8Tx5I55KMQ==
---- END SSH2 PUBLIC KEY ----
If so, you can always reformat it like this: ssh-rsa <SSH public key> <comment>, all on one line.
Or, better yet, reopen the private key with PuTTYgen by clicking “Load” and selecting the file. You should then see the public key in the text field titled “Public key for pasting into OpenSSH authorized_keys file” (quite descriptive…).
Alternative
You could have created the authorized_keys file and the .ssh folder with the technomuch account directly.
In order to do so, you would have needed to enable password authentication while you are doing the work.
To enable password authentication, you would have needed to edit the file:
sudo nano /etc/ssh/sshd_config
And change the line “PasswordAuthentication no” to “PasswordAuthentication yes”
Just don’t forget to turn it back to no when you’re done.
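In both cases, restart the SSH service for the change to take effect (a standard Ubuntu command, not specific to Lightsail):

sudo systemctl restart ssh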
Login using SSH key
Now, you should be able to use the private key to login:
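Something like this, assuming the key pair created earlier with ssh-keygen (the key name and IP are placeholders):

ssh -i ~/.ssh/<your-private-key> technomuch@<your-static-ip>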
ATTENTION: After this step, you’ll no longer be able to use the online SSH option offered by Amazon. Trying to login via the “Connect using SSH” button will just hang, as it will still try to use port 22 for SSH, which our server no longer uses.
Make sure you update the LightSail firewall by creating a custom rule for TCP port 2200 (Networking tab of the instance)
While you are at it, add the custom UDP 123 (NTP)
We will now configure SSH to use the port 2200 by editing the SSH config file:
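A sketch of that last step (the file location is the standard one; 2200 matches the firewall rule above):

sudo nano /etc/ssh/sshd_config

In that file, change the line “#Port 22” (or “Port 22”) to “Port 2200”, save, and restart the SSH service:

sudo systemctl restart ssh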
JavaScript has been around for a while and, up until 2015, there was no official way to deal with code separated into multiple files (modules). Indeed, ES2015 (ECMAScript 2015, initially known as ES6) contains the first specification from the “ECMAScript Harmony” project, which introduced modules, classes, etc. Sadly, Node.js is not quite ready for the implementation of ECMAScript 2015 modules. In the current version (13.11.0), it’s offered as “Stability: 1 - Experimental”. Because of this state of affairs, multiple module systems and libraries (CommonJS, AMD, UMD, RequireJS, etc.) have tried over the years to remedy this shortcoming.
Node.js has been using, and is currently using, CommonJS (at least until it fully supports ES2015 modules).
Require acts as if it was creating 2 objects: exports and module.exports. The variable exports is just a shortcut to module.exports; they are basically pointing to the same object. We will see later on that we may find ourselves in a bit of trouble if we misuse this shortcut.
Now that the stage is set, let’s actually look at our imported files one by one.
The add.js file
//add.js
module.exports = function ( first, second ) {
  return first + second
}
This is the easiest form. The file exports an anonymous function.
In this case, our require in main.js:
//main.js
const add = require('./add')
//...
Can be interpreted like this:
//main.js
const add = function ( first, second ) {
  return first + second
}
//...
The minus.js file
//minus.js
exports.minus = function ( first, second ) {
  return first - second
}
Please remember that we are importing it with the following code:
//main.js
//...
const { minus } = require('./minus')
//...
Please notice the brackets surrounding our minus function. In order to understand this one, we will have to explore a new syntax provided by ES6. It’s called the destructuring assignment.
I won’t cover this syntax at length here but here’s a quick example:
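A quick sketch of destructuring (the object and variable names are made up for the example):

const point = { x: 10, y: 20 }
const { x, y } = point   // same as: const x = point.x; const y = point.y
console.log(x, y)        // 10 20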
So, basically it’s extracting variables (destructuring) from the object based on its properties.
Then, when we used the following code:
// main.js
//...
const { minus } = require('./minus')
//...
And the definition in minus.js :
//minus.js
exports.minus = function ( first, second ) {
  return first - second
}
Here’s how we can interpret that call:
const { minus } = { minus : function ( first, second ) { return first - second } }

// Which is like :
const minus = function ( first, second ) {
  return first - second
}
We’ve created a variable minus pointing to the value of the minus property (which is a function) returned by the require statement.
In summary, this is why we had to use brackets {} to get our function. The add.js was returning a single function so, we could just pick it up directly into the variable add. But, in the case of the minus.js, the require was returning an object and we had to destructure the object into our minus variable.
The other_operators.js file
//other_operators.js
function multiplication (first, second) {
  return first * second
}

function div (first, second) {
  return first / second
}

module.exports = {
  mult: multiplication,
  div: div
}
We can apply pretty much the same logic that we used for destructuring the object returned by require in the minus.js file, except that we have 2 functions:
//main.js
//...
const { mult, div } = require('./other-operators')
//...
For the next line in main.js, we wanted to show that we do need to destructure the object returned by require:
As we pointed out earlier, the exports variable is just a shortcut for module.exports. They are both pointing at the same object. It’s easy to get wrapped up in your code and do something like this:
exports = { test : function () { console.log ("ops!"); } }
The problem with this is that you just moved the pointer of exports to a new object, so module.exports and exports are no longer pointing at the same object… but the require statement will only return the object that module.exports is pointing to!
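To keep both names pointing at the same object, add properties instead of reassigning; a quick sketch (the test function is just a placeholder):

// Good: adds a property to the shared object, so require() will return it
exports.test = function () { console.log("works!"); }

// Also fine: reassigning module.exports itself, since require() returns module.exports
module.exports = { test: function () { console.log("works too!"); } }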
Lost in space?
If you become unsure of what is on your module.exports, you can always temporarily add the following code at the end of the file and run it through node.
console.log(process.mainModule.exports)
For example, if we add this line at the end of other-operators.js, we would get the following output:
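Something along these lines (the exact formatting depends on your Node version):

{ mult: [Function: multiplication], div: [Function: div] }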
Finally, if you are returning an object and using the exports shortcut instead of module.exports, make sure that you are adding properties to the object, not overriding it. If you are just returning a function, use module.exports. And if you just don’t want to bother, always use module.exports!