NodeJS and MSSQL Connection Pool

Shaikh Zaki Mohammed
9 min read · Jan 20, 2023


In summer, a swimming pool gets overcrowded and needs proper management; the other seasons take care of themselves. Database connection pools, on the other hand, need to be managed all year round. So the need of the hour is to manage these connection pools: maintaining multiple database connections, opening and closing them, handling their configuration, and so on. In this article, we will learn how to manage NodeJS and MSSQL connection pools like a pro.

Life is always better at the poolside; so bring out your swimsuit and dive into the pool of connections for performing database operations. The MSSQL package for NodeJS, which we have already talked about a lot in my previous posts, is an awesome one to jam with. We have already seen how to team up with it, work through stored procedures, and even bake a lightweight data access layer. A developer fears no man, but managing a database connection pool is one of the things that scares them.

Before getting into the code and how we can manage connection pools, let's first understand what this is and why it is required. The super cool MSSQL package manages the connection pool object for us; it opens the connection, closes it automatically when no longer required, and sorts out our life without even letting us know (just like your loved ones). But there is always a "but" when everything is going well. The MSSQL package documentation gives several reasons for managing the pool by your own hand: you want to connect to multiple databases, you want separate pools for read and write operations, or you want to close connections manually to release resources in a performance-intensive app. These reasons can be good enough for you to choose the tough path of taking things into your own hands for the greater good. Hope that answers the WHY!

So, enough talking; let us start exploring the MSSQL package for managing connection pools. The package provides boilerplate code for managing a number of connection pools, and we will use that boilerplate as the base to roll with. Here is the guiding path for our flow of implementation:

  1. Creating a custom Pool Manager module
  2. Setting up a data configuration file
  3. Resurrecting our Data Access Layer
  4. Globally initializing the Data Access Layer
  5. Using our Data Access Layer with custom pool

A quick glimpse of the folder structure brings a smile and joy to most developers' faces:

node-pool-api
|-- api
|   |-- employees.js
|   |-- users.js
|-- data-access
|   |-- data-access.js
|   |-- data-config.js
|   |-- pool-manager.js
|-- definitions
|   |-- employees.rest
|   |-- users.rest
|-- scripts
|   |-- connections.sql
|   |-- foo_db.sql
|   |-- poo_db.sql
|-- .env
|-- index.js
|-- package.json
|-- README.md

Creating a custom Pool Manager module

Following the boilerplate code provided by the package, we introduce four functions, set(), get(), close() and closeAll(), to handle the pools. The pools are stored in a Map object and accessed by a named key. The entire pool manager module is shown below:

data-access/pool-manager.js

const mssql = require('mssql')
const pools = new Map()

const set = ({ name, config }) => {
    if (!name || !config) {
        throw new Error(`Missing configuration details`)
    }

    const pool = new mssql.ConnectionPool(config)
    const close = pool.close.bind(pool)
    pool.close = (...args) => {
        pools.delete(name)
        return close(...args)
    }
    pools.set(name, pool)
}

const get = (options) => {
    if (!pools.has(options.name)) {
        set(options)
    }
    return pools.get(options.name)
}

const close = async (name) => {
    const pool = pools.get(name)
    if (!pool) {
        throw Error(`Pool ${name} does not exist`)
    }
    await pool.close()
}

const closeAll = async () => {
    const promises = Array.from(pools.values()).map(pool => pool.close())
    await Promise.all(promises)
}

module.exports = {
    get,
    close,
    closeAll
};

Here, we do not expose the set() function, as it is meant for internal use; whenever get() is called for a pool that does not yet exist, set() immediately creates one and adds it to the pools Map. The close() function accepts the name of a pool and closes it on purpose. The closeAll() function simply closes all of the existing pools. As per the package boilerplate, we bind extra behavior to each pool's close() method so that the pool is also removed from our pools Map whenever close() is invoked. The options object holds two properties: name and config.
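One place where the exported closeAll() comes in handy is graceful shutdown. Here is a minimal sketch (the SIGINT wiring is my own assumption, not part of the package boilerplate) of draining every pool when the process is asked to terminate, e.g. in index.js:

const poolManager = require('./data-access/pool-manager')

// Hypothetical shutdown hook: drain every open pool before exiting.
process.on('SIGINT', async () => {
    await poolManager.closeAll()
    process.exit(0)
})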

Setting up a data configuration file

Before creating a configuration file for the connections, it is important to set up our environment (.env) file, which holds the connection details for the different databases. The .env file is shown below:

.env

# Application
NODE_ENV=development
PORT=3000

# Databases
SQL_UK_NAME=UK
SQL_UK_DRIVER=SQL Server
SQL_UK_SERVER=DESKTOP-6AQPB1S\SQLEXPRESS
SQL_UK_DATABASE=poo_db
SQL_UK_UID=sa
SQL_UK_PWD=password_123

SQL_SIBERIA_NAME=SIBERIA
SQL_SIBERIA_DRIVER=SQL Server
SQL_SIBERIA_SERVER=DESKTOP-6AQPB1S\SQLEXPRESS
SQL_SIBERIA_DATABASE=foo_db
SQL_SIBERIA_UID=sa
SQL_SIBERIA_PWD=password_123

Here, we have even given the connections names (UK and SIBERIA).

Now it is time to create our data configuration file, which builds the connection details on the fly from the environment variables.

data-access/data-config.js

const options = {
    encrypt: false,
    enableArithAbort: false
}

const getDataConfig = () => ({
    uk: {
        name: process.env.SQL_UK_NAME,
        config: {
            driver: process.env.SQL_UK_DRIVER,
            server: process.env.SQL_UK_SERVER,
            database: process.env.SQL_UK_DATABASE,
            user: process.env.SQL_UK_UID,
            password: process.env.SQL_UK_PWD,
            options: options
        }
    },
    siberia: {
        name: process.env.SQL_SIBERIA_NAME,
        config: {
            driver: process.env.SQL_SIBERIA_DRIVER,
            server: process.env.SQL_SIBERIA_SERVER,
            database: process.env.SQL_SIBERIA_DATABASE,
            user: process.env.SQL_SIBERIA_UID,
            password: process.env.SQL_SIBERIA_PWD,
            options: options
        }
    },
})

module.exports = getDataConfig

Here, we have created a getDataConfig() function which returns an object containing the defined set of database connections, each carrying name and config properties. The reason for making getDataConfig() a function is the freedom to obtain the config object only after we are sure the dotenv config() function has been invoked; otherwise every environment variable would read as undefined.
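To see why the ordering matters, here is a minimal sketch of the intended call sequence (the same thing index.js does later); calling getDataConfig() before dotenv.config() would leave every property undefined:

const dotenv = require('dotenv')
const getDataConfig = require('./data-access/data-config')

dotenv.config() // populates process.env from the .env file

const dataConfig = getDataConfig() // safe now: env vars are available
console.log(dataConfig.uk.name) // UK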

Resurrecting our Data Access Layer

Elephants have a good memory, and so do the developers who read CodeOmelet posts; just kidding. If you have already read the previous posts on the Data Access Layer (part 1 and part 2), you will recognize the data access code shown below. We are simply resurrecting our old piece of code with a few tweaks to adapt it to our custom pool manager. Instead of relying on the default single connection pool, the data access file now obtains its pool from the pool manager; the rest remains the same.

data-access/data-access.js

const mssql = require('mssql')
const poolManager = require('./pool-manager')

const dataAccess = (options) => {
    const run = async function (name, command, inputs = [], outputs = []) {
        try {
            const pool = poolManager.get(options)
            await pool.connect()
            const request = pool.request();
            assignParams(request, inputs, outputs);
            return request[name](command);
        } catch (error) {
            throw error
        }
    }

    return {
        query: async (command, inputs = [], outputs = []) => {
            return run('query', command, inputs, outputs);
        },
        queryEntity: async (command, entity, outputs = []) => {
            const inputs = fetchParams(entity);
            return run('query', command, inputs, outputs);
        },
        execute: async (command, inputs = [], outputs = []) => {
            return run('execute', command, inputs, outputs);
        },
        executeEntity: async (command, entity, outputs = []) => {
            const inputs = fetchParams(entity);
            return run('execute', command, inputs, outputs);
        },
        close: async () => {
            try {
                await poolManager.close(options.name)
            } catch (error) {
                throw error
            }
        }
    }
}

const fetchParams = entity => {
    const params = [];
    for (const key in entity) {
        if (entity.hasOwnProperty(key)) {
            const value = entity[key];
            params.push({
                name: key,
                value
            });
        }
    }
    return params;
};

const assignParams = (request, inputs, outputs) => {
    [inputs, outputs].forEach((params, index) => {
        const operation = index === 0 ? 'input' : 'output';
        params.forEach(param => {
            if (param.type) {
                request[operation](param.name, param.type, param.value);
            } else {
                request[operation](param.name, param.value);
            }
        });
    });
};

const generateTable = (columns, entities) => {
    const table = new mssql.Table();

    columns.forEach(column => {
        if (column && typeof column === 'object' && column.name && column.type) {
            if (column.hasOwnProperty('options')) {
                table.columns.add(column.name, column.type, column.options);
            } else {
                table.columns.add(column.name, column.type);
            }
        }
    });

    entities.forEach(entity => {
        table.rows.add(...columns.map(i => entity[i.name]));
    });

    return table;
};

module.exports = {
    dataAccess,
    generateTable
};

Here, notice that the dataAccess() function accepts an options object (with name and config properties), which is used to obtain the pool from the pool manager. Additionally, we now have a close() function, which was not part of our data access layer previously. One can clearly smell the presence of JavaScript closures in the above code (read more about JavaScript closures if you are not already aware of them).
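Notice also the exported generateTable() helper, which the endpoints in this article never exercise. As a minimal sketch (the column names, rows, and the target stored procedure here are all hypothetical), it can turn an array of entities into an mssql.Table suitable for a table-valued parameter:

const mssql = require('mssql')
const { generateTable } = require('./data-access/data-access')

// Hypothetical columns and rows; a matching TVP type must exist in the database.
const columns = [
    { name: 'Id', type: mssql.Int },
    { name: 'Name', type: mssql.NVarChar(100) }
]
const employees = [
    { Id: 1, Name: 'John' },
    { Id: 2, Name: 'Jane' }
]

const table = generateTable(columns, employees)
const inputs = [{ name: 'Employees', value: table }]
// await db.execute('usp_BulkInsertEmployee', inputs) // hypothetical procedure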

Globally initializing the Data Access Layer

We can create the data access object anywhere we want in our application, but what better place than the global level, so we can access it throughout the app without breaking a sweat? Yes, we will initialize the DAL instances in the index.js file and store them on the app locals object, as shown below:

index.js

const express = require('express')
const dotenv = require('dotenv')
const getDataConfig = require('./data-access/data-config')
const { dataAccess } = require('./data-access/data-access')

const app = express();

dotenv.config();

app.use(express.json());
app.use(express.urlencoded({ extended: false }));

const dataConfig = getDataConfig()

app.locals.db = {
    uk: dataAccess(dataConfig.uk),
    siberia: dataAccess(dataConfig.siberia)
}

app.get('/', (req, res) => {
    res.send('Pooling with NodeJS and SQL Server');
});

app.use('/api/employees', require('./api/employees'));
app.use('/api/users', require('./api/users'));

app.listen(process.env.PORT, () => {
    console.log(`Server started running on ${process.env.PORT} for ${process.env.NODE_ENV}`);
});

Here, we are initializing the data connections on the app.locals object:

const dataConfig = getDataConfig()

app.locals.db = {
    uk: dataAccess(dataConfig.uk),
    siberia: dataAccess(dataConfig.siberia)
}

Using our Data Access Layer with custom pool

It’s time to start using our DAL; we have created an endpoint for the users API, which is registered in the index.js file. The users API simply returns user records from both database connections (UK and SIBERIA).

api/users.js

const express = require('express')

const router = express.Router();

router.get('/', async (req, res) => {
    try {
        const command = `SELECT * FROM [User] ORDER BY Id DESC`

        const uk = (await req.app.locals.db.uk.query(command)).recordset;
        const siberia = (await req.app.locals.db.siberia.query(command)).recordset;

        res.json({ uk, siberia });
    } catch (error) {
        res.status(500).json(error);
    }
});

module.exports = router;

Likewise, we have one more API named employees, with two endpoints to fetch single and multiple records.

api/employees.js

const express = require('express')

const router = express.Router();

const dbUk = req => req.app.locals.db.uk
const dbSiberia = req => req.app.locals.db.siberia

router.get('/', async (req, res) => {
    try {
        const command = `SELECT * FROM Employee ORDER BY Id DESC`

        const uk = (await dbUk(req).query(command)).recordset;
        const siberia = (await dbSiberia(req).query(command)).recordset;

        // close() is async; await it so any failure lands in the catch block
        await dbUk(req).close()
        await dbSiberia(req).close()

        res.json({ uk, siberia });
    } catch (error) {
        res.status(500).json(error);
    }
});

router.get('/:id', async (req, res) => {
    try {
        const command = `SELECT * FROM Employee WHERE Id = @Id`
        const inputs = [
            { name: 'Id', value: req.params.id }
        ]

        const resultUk = await dbUk(req).query(command, inputs);
        const [uk = null] = resultUk.recordset;

        const resultSiberia = await dbSiberia(req).query(command, inputs);
        const [siberia = null] = resultSiberia.recordset;

        res.json({ uk, siberia })
    } catch (error) {
        res.status(500).json(error);
    }
});

module.exports = router;

Here, we are closing the database connections on purpose after fetching all of the employee records. We can even inspect the open connections by executing the following query against the master database of the MSSQL instance in SQL Server Management Studio.

SELECT
    DB_NAME(dbid) AS DBName,
    COUNT(dbid) AS NumberOfConnections,
    loginame AS LoginName
FROM
    sys.sysprocesses
WHERE
    dbid > 0
GROUP BY
    dbid, loginame;

After making an API call to either users or employees, we can check the number of open connections to foo_db and poo_db. For the employees endpoint, both connections are closed once the call completes, thanks to the manual close() calls.

We can implement all of the CRUD operations and stored procedure calls with this arrangement, as sketched below. The idea is to have a separate pool manager; if you already have a DAL for a single connection, you can simply open it up for multiple connections, as we did in this implementation.
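As an illustration, a stored procedure call through the same DAL could look like the sketch below; the usp_GetEmployeesByName procedure and the route are hypothetical, purely to show the execute() path. It would slot straight into api/employees.js:

// Hypothetical endpoint; assumes a usp_GetEmployeesByName procedure exists.
router.get('/search/:name', async (req, res) => {
    try {
        const inputs = [{ name: 'Name', value: req.params.name }]
        const result = await dbUk(req).execute('usp_GetEmployeesByName', inputs)
        res.json(result.recordset)
    } catch (error) {
        res.status(500).json(error)
    }
})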

Git Repository

Check out the git repository for this project or download the code.


Summary

The implementation of the custom connection pool is totally custom; I mean, it totally depends on the project's needs and setup: the power of being unopinionated. So this implementation can serve as a basis for building your own pool manager. First, you have to keep it cool, and second, you can add your own level of implementation on top of the pool manager logic. It is an important aspect to control, since many projects out there require multiple connection instances. So if you are playing along with the MSSQL package, you will surely have to come up with some smart solution; hehe.

Hope this article helps.

Originally published at https://codeomelet.com.
