Beets for altitude

I finally got my beet powder for the upcoming hiking trip! My plan is to take a "serving" (see the BeetElite box) every day for the two weeks preceding the trip.

So, what do beets have to do with hiking, you ask? Well, I'm hiking at altitude (Mt. Massive, 14,421 ft elevation), and nitrates are a powerful vasodilator that helps shuttle more oxygen throughout the body. There's more to it, though: supplementing with nitrate helps generate energy while using less oxygen. How much less? Somewhere around 19%, according to this article from the US National Library of Medicine (NLM): Dietary Nitrate Supplementation and Exercise Performance

In this article from the same publication:

“Beet-ing” the Mountain: A Review of the Physiological and Performance Effects of Dietary Nitrate Supplementation at Simulated and Terrestrial Altitude

It says that “NO3 supplementation is emerging as a promising nutritional aid, with potentially beneficial applications for the wide variety of individuals ascending to altitude each year.”

It does have its limitations. In this other article from Scientific American:

Beet Juice Could Help Body Beat Altitude

The researcher concludes by saying “If I had a bottle of beets around I would take it for sure. But that won’t bring you to Mount Everest just by drinking beetroot.”

So, it's not all voodoo mumbo jumbo; it's an actual biohacking trick!

Here are some more articles that I found:

How Does Beet Juice Improve Athletic Performance?

There’s New Data on the Beet Juice Boost

Peak Your Climbing Performance With Nitrate Supplements

Leadville 2021

This is a list of things that I've learned over the years. Nothing is really essential (except for a sole with good grip). I follow these ideas to make my trip a bit nicer, but, in the end, please do what you want and enjoy the scenery. We will find a way to have a blast!

Shoes

Hiking boots vs. trail runners: trail runners are lighter and hug the rocks better (more supple). You will get a better feel for the terrain, but it's harder on the feet. Hiking boots offer better protection for your ankles (and all-around protection).

I normally bring both because I have them, but these days I only use my trail runners.

Anyhow, a sole with good grip is the only important thing.

Special note:
Make sure that you have enough room for your toes. Your feet will swell, and going downhill for 4-5 hours straight may cause painful blue toenails.

Socks

Two pairs recommended: wear one and keep one in the backpack for a quick change.

Avoid cotton. A merino wool blend is a good choice (low friction, wicking, quick-drying, and odor-free).

Darn Tough is a good brand but a bit expensive ($25).

I normally just get some unknown brand from Academy for cheaper.

Pants

They should protect from the cold but stay loose enough for movement.

Special note: There is some rock scrambling near the top of the 14ers (hands-and-feet sections).

Underwear

Avoid cotton. Consider synthetic sport boxer briefs: moisture-wicking, and they protect the inner thighs…

Core/layering

Three-layer system: a synthetic base layer to wick away sweat; a puffy middle layer with air pockets for insulation; and finally a windbreaker (hooded and rainproof) to keep in the middle layer's warmth.

Gloves

Need a good pair for the 14ers.

Beanie for the 14ers.
Baseball cap, rag.

Hiking poles

Optional; easier on the legs while coming down (about 10-20% relief on the legs).

Head lamp/flashlights

Needed for early morning starts (14ers) and when running out of time in the evening, although a near-full moon is expected this year.

Food

Need a boost for the 14ers (carbs/salt): trail mix with some salted nuts, sweet chocolate, and dried fruit.

If we are not getting an early start (i.e., not doing a 14er), we stop by the coffee shop and get some pastries to go.

Water

Need at least a 2L container. You may need an extra liter in a side container for the 14ers.

Backpack

Big enough to fit the beanie, gloves, mid layer, outer layer, food, and water. I have a 22-liter pack and am struggling a bit.

Other things (optional):

Sun tan lotion

Sunglasses

Tylenol

Athletic tape/gauze

Salt/electrolyte

Pocket knife

String/rope

Tissues (my nose goes wild! also, can serve as TP)

Small Purell bottle (for the TP eventuality)

Small trash bags (for the TP eventuality)

Passing environment variables from Docker to Node.js

One of the main issues when creating and releasing Docker images is making sure that we do not reveal any secret information (passwords, etc.).

In Node, we often use a .env file to achieve this goal. Sadly, we would need to include this file in our Docker image, which would make it insecure. Indeed, if such a file is added to the image, anybody who gets the image would be able to see its contents.

So instead, we will want to pass our secret information when we run the image. There are two ways to pass environment variables to Docker:

  • Using the option -e

    docker run  [...] -e my_connection_string="xxxxx" -e my_password="xxx" my_node_container
  • Using the --env-file Docker option

For this method, you'll need to create a file containing a list of KEY=value pairs.
Example:
my_env.list

my_connection_string=xxxxxx
my_password=yyyyyyyy
my_secret=zzzzz

Then, run the container using the --env-file option:
docker run [...] --env-file ./my_env.list my_node_container

See this document for more details.

From there, you will be able to access these variables from Node by using process.env.{KEY}.

Please note that, as a general rule, you should always follow the motto "batteries included but removable": code your Node application with default values (when possible) so the software will still run when these environment variables are not provided, i.e., "straight out of the box".
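
For instance, here's a minimal sketch of that pattern (the variable names and default values are invented for the example):

// config.js (hypothetical): use the environment variable when provided,
// otherwise fall back to a sensible default so the app still starts.
const port = process.env.PORT || 3000;
const connectionString =
  process.env.my_connection_string || 'mongodb://localhost:27017/dev';

console.log(`Listening on port ${port}, using ${connectionString}`);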

Simple example:

Here’s an example to get you started

File: env_test.js

console.log(process.env);

File: my_env.list

my_connection_string=xxxxxx
my_password=yyyyyyyy
my_secret=zzzzz

File: Dockerfile

FROM node:12.19.0-alpine3.10

RUN mkdir -p /usr/app/src

WORKDIR /usr/app/src

COPY env_test.js .

CMD [ "node", "env_test" ]

Then build your container:

docker image build -t node-test .

Run it with the -e flag:

docker container run -e my_test="this is a test" node-test

Or, run it with the --env-file flag:

docker container run --env-file ./my_env.list node-test

Free SSL certificate

SSL certificates enable encrypted traffic (HTTPS) between the server and the browser. Until recently, I did not have much of a need for one since my work was done on the intranet. I did have many websites and tools that I was self-hosting on AWS, but I never really had to add any sort of protection.

I always knew that it would be a must-have if I ever wanted to open my applications to other users or even put more personal content on the web.

Anyhow, now that I've had some time to play around with it, I am shocked at how easy it was to get started with SSL… and it was FREE!

To tell you the truth, I'm mostly writing this post as a reminder of how to do it, since it was all done in less than 30 minutes!

Certificate Authority (CA)

Only a CA can issue the certificate that you will need to enable HTTPS. For my websites, I went with Let's Encrypt. We won't need to read much more on their website, as they provide a client called Certbot that will do all the work for us!

If you go to the Certbot website, you'll be able to enter the software (web server) and the system (OS) that you are using. In my case, I'm using Apache on Ubuntu 18.04. Once you select your software/system, you'll be provided with instructions on how to proceed.

I'll repeat some of the instructions here, just so I can remember the steps that I took, but I encourage you to go directly to their website. They did an excellent job of detailing every step, and it went off without a hitch.

I did have some issues afterward, but they were related to port 443, which SSL uses.

I’ll explain the steps that I took to solve this issue later on.

Certbot (visit website)

These are the steps that I took… for a detailed explanation, go to their website!

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-apache
sudo certbot --apache

Voila! It should now be working for you… well… almost.

Opening the port 443 on AWS

I needed to open the port 443 in the AWS Lightsail console for my instance. To do so, click on the vertical-three-dots icon of your instance and click Manage. From there, go to the Networking tab. We will change the firewall to make sure that we can communicate with port 443 of our instance.

  • Click “+ Add another”
  • Select HTTPS from the dropdown.
  • Click Save

I also had the firewall enabled within my instance, so I had to open that port there as well:

To see if the firewall (ufw) is active:

sudo ufw status

To open the https port:

sudo ufw allow https

Limited time only!

Sadly, this certificate expires every 3 months, but Certbot makes it quite easy to renew. Here's a renewal command that you can run right now (it's a dry run only and doesn't actually renew the certificate).

sudo certbot renew --dry-run

If you entered your email correctly while setting it up, you should receive a notification when it’s about time to renew. You will then need to re-enter this command without the --dry-run flag.

SOLID Principles

The SOLID principles are a set of software design principles that teach us how we can structure our functions and classes in order to be as robust, maintainable and flexible as possible.

S - Single-responsibility principle

A class should have one and only one reason to change, meaning that a class should have only one job.
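
A rough JavaScript sketch (the class names are invented for the example): a class that both stores users and prints reports would have two reasons to change, so we split the jobs.

// Storage is one job...
class UserRepository {
  constructor() { this.users = []; }
  add(user) { this.users.push(user); }
  findByName(name) { return this.users.find(u => u.name === name); }
}

// ...reporting is another, so it lives in its own class.
class UserReport {
  print(users) { users.forEach(u => console.log(`${u.name} <${u.email}>`)); }
}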

O - Open-closed Principle

Objects or entities should be open for extension but closed for modification.
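
A rough JavaScript sketch (the shapes are invented for the example): totalArea stays closed for modification, and we extend behavior by adding new shape classes.

class Circle {
  constructor(r) { this.r = r; }
  area() { return Math.PI * this.r * this.r; }
}

class Square {
  constructor(side) { this.side = side; }
  area() { return this.side * this.side; }
}

// Adding a Triangle class later would not require touching this function.
function totalArea(shapes) {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

console.log(totalArea([new Circle(1), new Square(2)])); // 7.14...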

L - Liskov substitution principle

Ability to replace any instance of a parent class with an instance of one of its child classes without negative side effects.
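
A rough JavaScript sketch (the logger classes are invented for the example): the subclass honors the parent's contract, so any caller expecting a Logger keeps working.

class Logger {
  log(msg) { console.log(msg); }
}

class TimestampLogger extends Logger {
  // Still logs the message, just with extra information.
  log(msg) { console.log(`${new Date().toISOString()} ${msg}`); }
}

function run(logger) { logger.log('starting...'); }

run(new Logger());
run(new TimestampLogger()); // substitutable, no negative side effects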

I - Interface segregation principle

A client should never be forced to implement an interface that it doesn't use, and clients shouldn't be forced to depend on methods they do not use.
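
A rough JavaScript sketch (the device classes are invented for the example): rather than one big "multi-function device" contract (print/scan/fax), we split it so a simple printer is never forced to carry methods it cannot use.

class Printer {
  print(doc) { console.log(`printing ${doc}`); }
}

class Scanner {
  scan(doc) { console.log(`scanning ${doc}`); }
}

// A device that really does both simply composes the two small roles.
class MultiFunctionDevice {
  constructor() { this.printer = new Printer(); this.scanner = new Scanner(); }
  print(doc) { this.printer.print(doc); }
  scan(doc) { this.scanner.scan(doc); }
}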

D - Dependency Inversion Principle

Entities must depend on abstractions, not on concretions. High-level modules must not depend on low-level modules; both should depend on abstractions.
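
A rough JavaScript sketch (the class names are invented for the example): the high-level OrderService depends on an abstract "store" (anything with a save() method) that is injected, not on a concrete database class.

class InMemoryStore {
  constructor() { this.items = []; }
  save(item) { this.items.push(item); }
}

class OrderService {
  constructor(store) { this.store = store; } // the abstraction is injected
  placeOrder(order) { this.store.save(order); }
}

// Swapping the low-level detail (e.g., a real database client exposing the
// same save() method) would not change OrderService at all.
const service = new OrderService(new InMemoryStore());
service.placeOrder({ id: 1 });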

Sources

WIKI: SOLID principles of object-oriented programming

Blog: S.O.L.I.D: The First 5 Principles of Object Oriented Design

Design Patterns

Behavioral Patterns

Most of these design patterns are specifically concerned with communication between objects.

  • Chain of responsibility : Chain of responsibility delegates commands to a chain of processing objects.

  • Command : Command creates objects which encapsulate actions and parameters.

  • Interpreter : Interpreter implements a specialized language.

  • Iterator : Iterator accesses the elements of an object sequentially without exposing its underlying representation.

  • Mediator : Mediator allows loose coupling between classes by being the only class that has detailed knowledge of their methods.

  • Memento : Memento provides the ability to restore an object to its previous state (undo).

  • Observer : Observer is a publish/subscribe pattern which allows a number of observer objects to see an event (see the sketch after this list).

  • State : State allows an object to alter its behavior when its internal state changes.

  • Strategy : Strategy allows one of a family of algorithms to be selected on-the-fly at runtime.

  • Template method : Template method defines the skeleton of an algorithm as an abstract class, allowing its subclasses to provide concrete behavior.

  • Visitor : Visitor separates an algorithm from an object structure by moving the hierarchy of methods into one object.
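
To make one of these concrete, here is a minimal JavaScript sketch of the Observer pattern (the names are invented for the example):

// The subject keeps a list of subscribers and notifies them of events.
class Subject {
  constructor() { this.listeners = []; }
  subscribe(fn) { this.listeners.push(fn); }
  emit(event) { this.listeners.forEach(fn => fn(event)); }
}

const temperature = new Subject();
temperature.subscribe(t => console.log(`display: ${t} C`));
temperature.subscribe(t => { if (t > 30) console.log('alarm!'); });
temperature.emit(32); // both observers "see" the event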

Creational

Creational patterns are ones that create objects, rather than having to instantiate objects directly. This gives the program more flexibility in deciding which objects need to be created for a given case.

  • Abstract factory : Abstract factory groups object factories that have a common theme.

  • Builder : Builder constructs complex objects by separating construction and representation.

  • Factory method : Factory method creates objects without specifying the exact class to create (see the sketch after this list).

  • Prototype : Prototype creates objects by cloning an existing object.

  • Singleton : Singleton restricts object creation for a class to only one instance.
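
Here is a minimal JavaScript sketch of the Factory method pattern (the classes are invented for the example): the base class defers the choice of which concrete object to create to its subclasses.

class Dialog {
  render() { this.createButton().draw(); } // uses the factory method
  createButton() { throw new Error('subclass must implement createButton()'); }
}

class WindowsDialog extends Dialog {
  createButton() { return { draw: () => console.log('drawing a Windows button') }; }
}

class WebDialog extends Dialog {
  createButton() { return { draw: () => console.log('drawing an HTML button') }; }
}

new WindowsDialog().render();
new WebDialog().render();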

Structural

These concern class and object composition. They use inheritance to compose interfaces and define ways to compose objects to obtain new functionality.

  • Adapter : Adapter allows classes with incompatible interfaces to work together by wrapping its own interface around that of an already existing class.

  • Bridge : Bridge decouples an abstraction from its implementation so that the two can vary independently.

  • Composite : Composite composes zero-or-more similar objects so that they can be manipulated as one object.

  • Decorator : Decorator dynamically adds/overrides behaviour in an existing method of an object (see the sketch after this list).

  • Facade : Facade provides a simplified interface to a large body of code.

  • Flyweight : Flyweight reduces the cost of creating and manipulating a large number of similar objects.

  • Proxy : Proxy provides a placeholder for another object to control access, reduce cost, and reduce complexity.
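
Here is a minimal JavaScript sketch of the Decorator pattern (the classes are invented for the example): behavior is added by wrapping an object, without modifying its class.

class Coffee {
  cost() { return 2; }
}

class WithMilk {
  constructor(drink) { this.drink = drink; }
  cost() { return this.drink.cost() + 0.5; }
}

class WithSugar {
  constructor(drink) { this.drink = drink; }
  cost() { return this.drink.cost() + 0.25; }
}

const order = new WithSugar(new WithMilk(new Coffee()));
console.log(order.cost()); // 2.75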

Learn more

WIKI : https://en.wikipedia.org/wiki/Software_design_pattern

Book: Design Patterns: Elements of Reusable Object-Oriented Software

Book: Head First Design Patterns: A Brain-Friendly Guide

YouTube: Christopher Okhravi

Multiple webapps on AWS Node.js

In this post, I will show you how to host multiple Node.js web applications on the same AWS Lightsail Node.js instance.

Basically, we will use the Apache web server to forward requests to the right application based on the request path.

Installing our applications on the server

Under the /home/bitnami folder, create an apps subfolder (just a convention). This is where we will put our Node.js apps.

Once your apps are copied there, you can start them with forever so they won't shut down when you log off and will restart on a server reboot.

Example:

forever start src/app1.js

Now your apps should be running on whatever ports you have configured for them.

Configuring Apache as a proxy

We need to tell Apache which path maps to which app and what port each app is running on.

In order to do so, we will edit the apps prefix file:

nano /opt/bitnami/apache2/conf/bitnami/bitnami-apps-prefix.conf

And add the following lines (example)

# Bitnami applications installed in a prefix URL

ProxyPass /app1 http://127.0.0.1:3000/app1
ProxyPassReverse /app1 http://127.0.0.1:3000/app1

ProxyPass /app2 http://127.0.0.1:5000
ProxyPassReverse /app2 http://127.0.0.1:5000

# ...

In this example, the configuration tells Apache to forward any request to /app1 straight to our app running on port 3000, while /app2 requests are forwarded to the app running on port 5000.

Please note that our app1 will receive requests prefixed with /app1 (http://127.0.0.1:3000/app1), while app2 will not receive any prefix (http://127.0.0.1:5000).
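
If your apps happen to be built with Express (just an assumption for this sketch), that means app1 has to register its routes under the /app1 prefix, while app2 defines its routes without any prefix:

// app1.js (hypothetical): Apache forwards the /app1 prefix along,
// so the routes must include it.
const express = require('express');
const app = express();

app.get('/app1/test/test', (req, res) => {
  res.send(`Hello ${req.query.name || 'world'} from app1`);
});

app.listen(3000, () => console.log('app1 listening on port 3000'));

// app2.js would listen on port 5000 and define routes without a prefix,
// e.g. app.get('/about', ...), since the ProxyPass rule maps the /app2
// prefix to the root of that app.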

Restart Apache:

sudo /opt/bitnami/ctlscript.sh restart apache

Now both apps should be available from the internet:

http://xx.xx.xx.xx/app1/test/test?name=George
http://xx.xx.xx.xx/app2/about

Useful command

To stop a forever process by name (app1.js in our example):

uid=$(forever list | grep app1.js | cut -c24-27) && forever stop $uid

AWS Virtual Private Server (VPS)

In this post, I will show you how to create, configure and secure a Virtual Private Server (VPS) on Amazon Web Services (AWS) Lightsail.

Creating a VPS instance

First up, go to the Amazon website for Lightsail and sign up for an account if you don’t already have one.

On your Home page, click on “create instance”. For this post, I’ve selected OS only and Ubuntu 18.

Once you've created a VPS instance, click on the three-dot icon and select Manage. In the Networking tab, click on "create static IP". This way, you'll be able to stop/start the instance without losing the IP that was assigned to it.

Downloading the SSH key

The next step is to download our SSH key so we can SSH into our server. Go to your account's page (toolbar on top) and, on the SSH Keys tab, download the key.

Put the key in your home directory under the .ssh subdirectory (by convention).

Example:

c:/users/denis/.ssh/myKey.pem

To connect with an SSH client (I use Git Bash), enter the following line:

ssh [user]@[IP address] -p [port] -i [full path to ssh key]

Example:

ssh ubuntu@3.89.214.99 -p 22 -i c:/users/denis/.ssh/myKey.pem

Getting the server up-to-date

Update the package lists:

sudo apt-get update

Then actually perform the upgrade:

sudo apt-get upgrade

NOTE: For the sshd config change question, I normally select "install the package maintainer's version".

Remove packages that are no longer required:

sudo apt-get autoremove

Install finger (optional)

sudo apt-get install finger

Adding a new user

In this example, we will create a user named technomuch

sudo adduser technomuch

Set the new password and general information for the new user.

Now, let’s add sudo capabilities to our new user by copying the default file

sudo cp /etc/sudoers.d/90-cloud-init-users /etc/sudoers.d/technomuch

After that, edit the file and change "ubuntu" to "technomuch" (I use the nano text editor):

sudo nano /etc/sudoers.d/technomuch

Create an SSH key to use for our new user

On the client side (local), create a new SSH key from ssh-keygen.

NOTE: on Windows, I used Git Bash to create the SSH key pair.

ssh-keygen

Then enter a name for the key that you will create. After that, you'll be asked to set a passphrase to protect the key.

Two files will be created: a file without an extension and a ".pub" file. The ".pub" file is the public key that we will eventually copy to our server, and the other is our private key, which you should never share with anyone.

SSH set up

We will now create the SSH authorized_keys file, set its permissions, and finally change its ownership.

sudo mkdir /home/technomuch/.ssh
sudo touch /home/technomuch/.ssh/authorized_keys
sudo chmod 700 /home/technomuch/.ssh
sudo chmod 644 /home/technomuch/.ssh/authorized_keys
sudo chown technomuch:technomuch /home/technomuch/.ssh
sudo chown technomuch:technomuch /home/technomuch/.ssh/authorized_keys

Now that we have the file, you’ll need to copy/paste the content of the xxxx.pub file that we’ve just created using the ssh-keygen software.

sudo nano /home/technomuch/.ssh/authorized_keys

Use the console to reboot

PS: Public key file format

The value in the xxx.pub file should be on one line and look like this:

ssh-rsa AAAAB3<...very long  string...>Tx5I55KMQ== rsa-key-20200820

But, if you have generated the key using another tool like PuTTYgen, you may get a key looking like this:

---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20200820"
AAAAB3NzaC1yc2EAAAABJQAAAQEAki9hkBcpDBoS+7B/GdaLMP+Clu4ywfZgZi80
... more lines ...
+Qy3XKjwPD9AtNOD+vIayR5/T4OSF1ooEzcMarcS8xu3gTEoykH55f8IFZU0TyHU
EEQsiSsbNeV7uW44YAUmX+AWM+IODGF2YirISHGe8Tx5I55KMQ==
---- END SSH2 PUBLIC KEY ----

If so, you can always reformat it as ssh-rsa <SSH public key> <comment>, all on one line.

Or, better yet, reopen the private key with PuTTYgen by clicking “Load” and selecting the file. You should then see the public key in the text field titled “Public key for pasting into OpenSSH authorized_keys file” (quite descriptive…).

Alternative

You could have created the authorized_keys file and the .ssh folder from the technomuch account directly.

In order to do so, you would have needed to enable password authentication while you are doing the work.

To enable password authentication, you would have needed to edit the file:

sudo nano /etc/ssh/sshd_config

And change the line “PasswordAuthentication no” to “PasswordAuthentication yes”

Just don’t forget to turn it back to no when you’re done.

Login using SSH key

Now, you should be able to use the private key to login:

ssh technomuch@3.89.214.99 -p 22 -i c:/users/denis/.ssh/generatedKey

Change the SSH port from 22 to 2200

ATTENTION: After this step, you'll no longer be able to use the online SSH option offered by Amazon. Trying to log in via the "Connect using SSH" button will just hang, as it will still try to use port 22 for SSH, which our server no longer uses.

Make sure you update the Lightsail firewall by creating a custom rule for TCP port 2200 (Networking tab of the instance).

While you are at it, add a custom rule for UDP port 123 (NTP).

We will now configure SSH to use port 2200 by editing the SSH config file:

sudo nano /etc/ssh/sshd_config

Then uncomment Port 22 and change it to Port 2200.

Use the console to reboot

Now, you can ssh in on port 2200

ssh technomuch@3.89.214.99 -p 2200 -i c:/users/denis/.ssh/generatedKey

Make sure to secure the server by enabling a firewall.

Deny all incoming traffic, then open a few specific ports:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ntp
sudo ufw allow 2200/tcp
sudo ufw allow www
sudo ufw enable

Check the firewall status (ufw is installed by default but inactive…):

sudo ufw status

Summary

So, to recap: we've created a server instance and updated all packages to the latest version.

We've created a new user and given it sudo capabilities.

We've created a new SSH key and set up our new user with the public key.

After changing the SSH port from 22 to 2200, we've enabled a firewall.

That's it, you should now be running a secure AWS VPS. Congratulations!

Node.js modules (CommonJS modules)

JavaScript has been around for a while, and up until 2015 there was no official way to deal with code separated into multiple files (modules). ES2015 (ECMAScript 2015, initially known as ES6) contains the first specification from the "ECMAScript Harmony" project, which introduces modules, classes, etc. Sadly, Node.js is not quite ready for ECMAScript 2015 modules: in the current version (13.11.0), they are offered as "Stability: 1 - Experimental". Because of this state of affairs, multiple conventions and libraries (CommonJS, AMD, UMD, RequireJS, etc.) have tried over the years to remedy this shortcoming.

Node.js has been using, and currently uses, CommonJS (at least until it fully supports ES2015 modules).

Here are some examples:

main.js

//main.js
const add = require('./add')
const { minus } = require('./minus')
const { mult, div } = require('./other-operators')
const others = require('./other-operators')
const mult2 = require('./other-operators').mult

console.log(add(4,2))
console.log(minus(4,2))
console.log(mult(4,2))
console.log(div(4,2))
console.log(others.mult(4,2))
console.log(others.div(4,2))
console.log(mult2(4,2))

Please note the different forms that the variable assignment and the require call can take.

 

The action performed by require

When you define the files that will be imported via require, you need to keep the following template in mind:

//Template to keep in mind!
var module = { exports: {} };
var exports = module.exports;

// ------------------------------
// WHAT-EVER-IS-IN-YOUR-FILE
// ------------------------------

return module.exports;

require acts as if it were creating two objects: exports and module.exports. The variable exports is just a shortcut to module.exports; they both point to the same object. We will see later on that we may find ourselves in a bit of trouble if we misuse this shortcut.

Now that the stage is set, let’s actually look at our imported files one by one.

 

The add.js file

//add.js
module.exports = function ( first, second ) {
  return first + second
}

This is the easiest form. The file exports an anonymous function.

In this case, our require in main.js:

//main.js
const add = require('./add')
//...

Can be interpreted like this:

//main.js
const add = function ( first, second ) {
  return first + second
}
//...

 

The minus.js file

//minus.js
exports.minus = function ( first, second ) {
  return first - second
}

Please remember that we are importing it with the following code:

//main.js
//...
const { minus } = require('./minus')
//...

Notice the curly braces surrounding our minus function. In order to understand this one, we will have to explore a syntax introduced by ES6 called the destructuring assignment.

I won’t cover this syntax at length here but here’s a quick example:

const { p, q } = { a: "test", p: 42, q: true, z: "whatever" };

console.log(p); // 42
console.log(q); // true

As you can see, it’s a quick way to extract variables and values from an object.

It’s effectively like doing this:

const o = { a: "test", p: 42, q: true, z: "whatever" };
const p = o.p;
const q = o.q;

console.log(p); // 42
console.log(q); // true

So, basically, it's extracting variables from (destructuring) the object based on its properties.

Then, when we used the following code:

// main.js
//...
const { minus } = require('./minus')
//...

And the definition in minus.js :

//minus.js
exports.minus = function ( first, second ) {
  return first - second
}

Here’s how we can interpret that call:

const { minus } = { minus : function ( first, second ) {
  return first - second
}}

// Which is like :
const minus = function ( first, second ) {
  return first - second
}

We’ve created a variable minus pointing to the value of the minus property (which is a function) returned by the require statement.

In summary, this is why we had to use curly braces {} to get our function. add.js was returning a single function, so we could pick it up directly into the variable add. But in the case of minus.js, require was returning an object, and we had to destructure that object into our minus variable.

 

The other-operators.js file

//other-operators.js
function multiplication (first, second) {
  return first * second
}

function div (first, second) {
  return first / second
}

module.exports = {
  mult: multiplication,
  div: div
}

We can apply pretty much the same logic that we used to destructure the object returned by require for minus.js, except that we now have two functions:

//main.js
//...
const { mult, div } = require('./other-operators')
//...

For the next line in main.js, we wanted to show that we do not need to destructure the object returned by require:

//main.js
//...
const others = require('./other-operators')
//...

We just have to remember that the variable others is an object, not a function. So we have to call the mult and div functions this way:

//main.js
//...
console.log(others.mult(4,2))
console.log(others.div(4,2))
//...

The next one is a bit ugly but I have seen this in the past…

//main.js
//...
const mult2 = require('./other-operators').mult
//...

It's basically like getting the value of mult (a function) directly from the object returned by require:

const johnsname = { name: "john" }.name
console.log(johnsname) //resulting in "john"

 

Things to avoid

As we pointed out earlier, the exports variable is just a shortcut for module.exports. They are both pointing at the same object. It’s easy to get wrapped up in your code and do something like this:

exports = { test : function () { console.log ("ops!"); } }

The problem with this is that you just moved the exports pointer to a new object, so module.exports and exports are no longer pointing at the same object… but the require statement will only return the object that module.exports points to!
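
A minimal sketch of the safe alternatives (either add properties to the shared object, or reassign module.exports itself):

// Option 1: add a property; exports and module.exports still point to the same object.
exports.test = function () { console.log('ok!'); };

// Option 2: reassign module.exports directly; this is the object require() returns.
module.exports = { test: function () { console.log('ok!'); } };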

 

Lost in space?

If you become unsure of what is on your module.exports, you can always temporarily add the following code at the end of the file and run it through node.

console.log(process.mainModule.exports)

For example, if we add this line at the end of other-operators.js, we would get the following output:

{ mult: [Function: multiplication], div: [Function: div] }

Summary

In summary, just keep in mind the following pattern:

//Template to keep in mind!
var module = { exports: {} };
var exports = module.exports;

// ------------------------------
// WHAT-EVER-IS-IN-YOUR-FILE
// ------------------------------

return module.exports;

Also, make sure you understand the destructuring assignment.

If you are in doubt, temporarily add this code to your file and run it through node:

console.log(process.mainModule.exports)

Finally, if you are returning an object and using the exports shortcut instead of module.exports, make sure that you are adding properties to the object, not overwriting it. If you are returning a single function, use module.exports. And if you just don't want to bother, always use module.exports!