David WORMS

CEO and Big Data Solution Architect

David Worms founded Adaltas in 2004 with the objective of sharing his knowledge and passion for innovation in the Open Source ecosystem. Since 2010, he has been supporting his clients in the definition and implementation of distributed, secure and highly available Big Data infrastructures.

He is an influential contributor to the Node.js community and has lately been working with Kubernetes.

Published articles

Innovation, project vs product culture in Data Science

Categories: Data Science, Data Governance | Tags: DevOps, Agile, Scrum

Data Science carries the jobs of tomorrow. It is closely linked to the understanding of the business use cases, the behaviors and the insights that will be extracted from existing data. The stakes are…

By David WORMS

Oct 8, 2019

Gatsby.js, React and GraphQL for documentation websites

Categories: Adaltas Summit 2018, Front End | Tags: API, Gatsby, GraphQL, HTTP, JAMstack, JavaScript, Markdown, Node.js, React.js, SEO

In the last few months, I have started to redesign some of our Open Source project websites. This includes the websites of the Node.js CSV project, the Node.js HBase client and the Nikita project, our…

By David WORMS

Apr 1, 2019

Main advantages of GraphQL as an alternative to REST

Categories: Front End | Tags: API, GraphQL, GRPC, JSON, Node.js, Registry, REST

GraphQL is based on a simple idea: moving the assembly of a request from the server to the client. The client sees the overall strongly-typed schema instead of multiple REST endpoints and builds…
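
As a quick illustration of that idea, here is a minimal sketch of a client assembling its own query against a single GraphQL endpoint; the endpoint URL and field names are made up, and a Node.js 18+ ES module context is assumed for the global fetch and top-level await.

```javascript
// Hypothetical schema: the client declares exactly the fields it needs,
// in one request, instead of calling several REST endpoints.
const query = `
  query AuthorWithArticles($login: String!) {
    author(login: $login) {
      name
      articles(last: 3) {
        title
        publishedAt
      }
    }
  }
`;

// A single POST to one /graphql endpoint (URL is an example).
const response = await fetch('https://example.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables: { login: 'wdavidw' } }),
});
const { data } = await response.json();
console.log(data.author.articles);
```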

By David WORMS

Nov 27, 2018

Node.js CSV version 4 - re-writing and performance

Categories: Node.js | Tags: CLI, CSV, Data Engineering, Refactoring, Release and features

Today, we release a new major version of the Node.js CSV parser project. Version 4 is a complete re-writing of the project focusing on performance. It also comes with new functionalities as well as…

By David WORMS

Nov 19, 2018

Managing User Identities on Big Data Clusters

Categories: Cyber Security, Data Governance | Tags: Ansible, FreeIPA, Identity, Kerberos, LDAP, Active Directory

Securing a Big Data Cluster involves integrating or deploying specific services to store users. Some users are cluster-specific while others are available across all clusters. It is not always easy to…

By David WORMS

Nov 8, 2018

One week to discuss technology in a Moroccan riad

Categories: Adaltas Summit 2018, Learning | Tags: Flink, Knox, CDSW, Deep Learning, Gatsby, Kubernetes, Node.js, React.js, Hadoop

Adaltas organizes this year its first conference, from October 22 to 26. On the agenda of these 5 days of conference: discussing technology in one of the most beautiful riads of Marrakech. Mix the…

By David WORMS

Oct 11, 2018

Deploying a secured Flink cluster on Kubernetes

Categories: Big Data | Tags: Flink, HDFS, Kafka, Elasticsearch, Encryption, Kerberos, SSL/TLS

When deploying secured Flink applications inside Kubernetes, you are faced with two choices. Assuming your Kubernetes is secure, you may rely on the underlying platform or rely on Flink native…

By David WORMS

Oct 8, 2018

Data Lake ingestion best practices

Categories: Big Data, Data Engineering | Tags: Avro, Hive, NiFi, ORC, Spark, Data Lake, File Format, Data Governance, HDF, Operation, Protocol Buffers, Registry, Schema

Creating a Data Lake requires rigor and experience. Here are some good practices around data ingestion both for batch and stream architectures that we recommend and implement with our customers…

By David WORMS

Jun 18, 2018

Essential questions about Time Series

Categories: Big Data | Tags: Druid, HBase, Hive, ORC, Elasticsearch, Grafana, IoT

Today, the bulk of Big Data is temporal. We see it in the media and among our customers: smart meters, banking transactions, smart factories, connected vehicles … IoT and Big Data go hand in hand. We…

By David WORMS

Mar 19, 2018

Publishing guidelines

Categories: DevOps & SRE | Tags: Arch Linux, KVM, Markdown, Vagrant, VM

This is as much a set of guidelines targeting everyone publishing content on the web as rules for reviewers to ensure no validation is forgotten before submitting for publication. It mostly targets…

By David WORMS

Feb 26, 2018

Notes after Katacoda Training on Kubernetes Container Orchestration

Categories: Containers Orchestration, Learning | Tags: Helm, Ingress, Kubeadm, Kubernetes, CNI, Micro Services, Minikube, SSL/TLS, YAML

A few weeks ago, I dedicated two days to follow the tutorials available on Katacoda, the interactive learning platform for Kubernetes or any other container orchestration platform. I’m sharing my…

By David WORMS

Dec 14, 2017

Micro Services

Categories: Cloud Computing, Containers Orchestration, Open Source Summit Europe 2017 | Tags: Mesos, CNCF, DNS, Encryption, GRPC, Istio, Kubernetes, Linkerd, Micro Services, MITM, Proxy, Service Mesh, SSL/TLS, SPOF

Back in the day, applications were monolithic and we could use an IP address to access a service. With virtual machines (VM), multiple hosts started to appear on the same machine with multiple apps…

By David WORMS

Nov 14, 2017

MariaDB integration with Hadoop

Categories: Infrastructure | Tags: Hive, Database, HA, MariaDB, Hadoop

During a workshop with one of our customers, Adaltas identified a potential risk in using MariaDB’s High Availability (HA) strategy. Since the customer selected Cloudera’s CDH 5 distribution, the…

By David WORMS

Jul 31, 2017

Oracle DB synchronization to Hadoop with CDC

Categories: Data Engineering | Tags: Hive, Sqoop, CDC, Data Warehouse, GoldenGate, Oracle

This note is the result of a discussion about the synchronization of data written in a database to a warehouse stored in Hadoop. Thanks to Claude Daub from GFI who wrote it and who authorizes us to…

By David WORMS

Jul 31, 2017

Hive Metastore HA with DBTokenStore: Failed to initialize master key

Categories: Big Data, DevOps & SRE | Tags: Hive, Bug, Infrastructure

This article describes my little adventure around a startup error with the Hive Metastore. It should be reproducible with any secure installation, meaning with Kerberos, with high availability enabled…

By David WORMS

Jul 21, 2016

EclairJS - Putting a Spark in Web Apps

Categories: Data Engineering, Front End | Tags: Spark, JavaScript, Jupyter

Presentation by David Fallside from IBM, images extracted from the presentation. Introduction Web Apps development has moved from Java to NodeJS and Javascript. It provides a simple and rich…

By David WORMS

Jul 17, 2016

Hive, Calcite and Druid

Categories: Big Data | Tags: Analytics, Druid, Hive, Database, Hadoop

BI/OLAP requires interactive visualization of complex data streams: real-time bidding events, user activity streams, voice call logs, network traffic flows, firewall events, application KPIs. Traditional…

By David WORMS

Jul 14, 2016

Red Hat Storage Gluster and its integration with Hadoop

Categories: Big Data | Tags: HDFS, GlusterFS, Red Hat, Storage, Hadoop

I had the opportunity to be introduced to Red Hat Storage and Gluster in a joint presentation by Red Hat France and the company StartX. I have here recompiled my notes, at least partially. I will…

By David WORMS

Jul 3, 2015

A simple connect middleware to transpile CoffeeScript files

Categories: Hack, Node.js | Tags: CoffeeScript, Node.js, Tools

This new module called connect-coffee-script is a Connect middleware used to serve JavaScript files written in CoffeeScript. This middleware is to be used by connect or any Connect compatible…
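
A hypothetical usage sketch follows; the src, dest and bare option names are assumptions modeled on similar Connect asset-compilation middleware, so check the connect-coffee-script README for the exact signature.

```javascript
// Assumed usage of connect-coffee-script in a Connect application.
var http = require('http');
var connect = require('connect');
var coffeeScript = require('connect-coffee-script');
var serveStatic = require('serve-static');

var app = connect();
app.use(coffeeScript({
  src: __dirname + '/src',      // .coffee sources (option name assumed)
  dest: __dirname + '/public',  // compiled .js output (option name assumed)
  bare: true                    // skip the top-level function wrapper (assumed)
}));
app.use(serveStatic(__dirname + '/public'));
http.createServer(app).listen(3000);
```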

By David WORMS

Jul 4, 2014

Tutorial for creating and publishing a new Node.js module

Categories: Front End | Tags: CoffeeScript, GitHub, JavaScript, Learning and tutorial, License, Mocha, Node.js, NPM, Travis CI, Unit tests

In this tutorial, I provide complete instructions for creating a new Node.js module, writing the code in CoffeeScript, publishing it on GitHub, sharing it with other Node.js fellows through NPM…

By David WORMS

Dec 3, 2013

Crawl your website, including login forms, with PhantomJS

Categories: Front End | Tags: CoffeeScript, JavaScript, Mocha, Node.js, Unit tests

With PhantomJS, we start a headless WebKit and pilot it with our own scripts. Said differently, we write a script in JavaScript or CoffeeScript which controls an Internet browser and manipulates the…
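
Below is a minimal sketch of such a script, to be run with the phantomjs binary; the URL and form selectors are hypothetical.

```javascript
// login-crawl.js -- run with: phantomjs login-crawl.js
var page = require('webpage').create();

page.open('https://example.com/login', function (status) {
  if (status !== 'success') { phantom.exit(1); }
  // Fill and submit the login form inside the page context.
  page.evaluate(function () {
    document.querySelector('input[name=username]').value = 'demo';
    document.querySelector('input[name=password]').value = 'secret';
    document.querySelector('form').submit();
  });
  // Give the authenticated page some time to load before inspecting it.
  setTimeout(function () {
    var title = page.evaluate(function () {
      return document.title;
    });
    console.log('Logged-in page title:', title);
    phantom.exit(0);
  }, 3000);
});
```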

By David WORMS

Nov 27, 2013

Catch 'uncaughtException' error in your mocha test

Categories: Node.js | Tags: DevOps, JavaScript, Mocha, Unit tests

This isn’t the first time I faced this situation. Today, I finally found the time and energy to look for a solution. In your mocha test, let’s say you need to test an expected “uncaughtException…
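
One common pattern, sketched here under the assumption of an error thrown asynchronously, is to temporarily detach Mocha's own 'uncaughtException' listeners, install your own, and restore them afterwards.

```javascript
const assert = require('assert');

describe('uncaughtException', function () {
  it('is caught by a temporary listener', function (done) {
    // Save and detach Mocha's listeners for the duration of the test.
    const mochaListeners = process.listeners('uncaughtException');
    process.removeAllListeners('uncaughtException');
    process.once('uncaughtException', function (err) {
      // Restore Mocha's listeners before finishing the test.
      mochaListeners.forEach(function (listener) {
        process.on('uncaughtException', listener);
      });
      assert.strictEqual(err.message, 'boom');
      done();
    });
    // Throw asynchronously so the error escapes the test's call stack.
    setTimeout(function () { throw new Error('boom'); }, 10);
  });
});
```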

By David WORMS

Oct 27, 2013

Remote connection with SSH

Categories: Cyber Security | Tags: Automation, HTTP, SSH

While teaching Big Data and Hadoop, a student asked me about SSH and how to use it. I discuss the protocol and the tools to benefit from it. Lately, I have been automating the deployment of Hadoop clusters…

By David WORMS

Oct 2, 2013

Components for CDH and HDP

Categories: Big Data | Tags: Flume, Hive, Oozie, Sqoop, Zookeeper, Cloudera, CDH, Hortonworks, HDP, Hadoop

I was interested to compare the different components distributed by Cloudera and Hortonworks. This also gives us an idea of the versions packaged by the two distributions. At the time of this writing…

By David WORMS

Sep 22, 2013

Splitting HDFS files into multiple Hive tables

Categories: Data Engineering | Tags: Flume, HDFS, Hive, Oozie, Pig, SQL

I am going to show how to split a CSV file stored inside HDFS as multiple Hive tables based on the content of each record. The context is simple. We are using Flume to collect logs from all over our…

By David WORMS

Sep 15, 2013

About the new BSD license and its difference with other BSD licenses

Categories: Data Governance | Tags: License, Open source

As a non-restrictive Open Source license, the “new BSD license” is a commonly used license across the Node.js community. However, it is only one of the BSD licenses available alongside the original “BSD…

By David WORMS

Aug 8, 2013

Kerberos and delegation tokens security with WebHDFS

Categories: Cyber Security | Tags: HDFS, Big Data, HTTP, Kerberos

WebHDFS is an HTTP REST server bundled with the latest version of Hadoop. What interests me in this article is digging into security with the Kerberos and delegation token functionalities. I will cover…

By David WORMS

Jul 25, 2013

Testing the Oracle SQL Connector for Hadoop HDFS

Categories: Data Engineering | Tags: HDFS, CDH, Database, File system, Oracle, SQL

Using Oracle SQL Connector for HDFS, you can use Oracle Database to access and analyze data residing in HDFS files or a Hive table. You can also query and join data in HDFS or a Hive table with other…

By David WORMS

Jul 15, 2013

Maven 3 behind a proxy

Categories: Hack | Tags: Maven, Java, Proxy

Maven 3 isn’t so different from its previous version 2. You will migrate most of your projects quite easily between the two versions. That wasn’t the case a few years ago between versions 1 and…

By David WORMS

Jul 11, 2013

Node CSV version 0.2.7

Categories: Hack | Tags: CoffeeScript, CSV, Node.js, Pipeline

While releasing version 0.2.7 of the CSV parser for Node.js, I stop here to drop a few lines about what has made it into this release. We are now using the latest CoffeeScript, which is version 1.4.…

By David WORMS

Jul 9, 2013

State of the Hadoop open-source ecosystem in early 2013

Categories: Big Data | Tags: Flume, Kafka, Mahout, Mesos, Phoenix, Pig, File Format, Hadoop

Hadoop is already a large ecosystem and my guess is that 2013 will be the year where it grows even larger. There are some pieces that we no longer need to present. ZooKeeper, HBase, Hive, Pig, Flume…

By David WORMS

Jul 8, 2013

Oracle and Hive, how data are published?

Categories: Big Data | Tags: Hive, Sqoop, Data Lake, Oracle

In the past few days, I’ve published 3 related articles: a first one covering the option to integrate Oracle and Hadoop, a second one explaining how to install and use the Oracle SQL Connector with…

By David WORMS

Jul 6, 2013

Oracle to Apache Hive with the Oracle SQL Connector

Categories: Business Intelligence | Tags: HDFS, Hive, Network, Oracle

In a previous article published last week, I introduced the choices available to connect Oracle and Hadoop. In a follow up article, I covered the Oracle SQL Connector, its installation and integration…

By David WORMS

May 27, 2013

Options to connect and integrate Hadoop with Oracle

Categories: Data Engineering | Tags: Avro, HDFS, Hive, MapReduce, Sqoop, Database, Java, NoSQL, Oracle, R, RDBMS, SQL

I will list the different tools and libraries available to us developers in order to integrate Oracle and Hadoop. The Oracle SQL Connector for HDFS described below is covered in a follow up article…

By David WORMS

May 15, 2013

The state of Hadoop distributions

Categories: Big Data | Tags: Cloudera, Hortonworks, Intel, Oracle, Hadoop

Apache Hadoop is of course made available for download on its official webpage. However, downloading and installing the several components that make a Hadoop cluster is not an easy task and is a…

By David WORMS

May 11, 2013

Apache Hive Essentials How-to by Darren Lee

Categories: Business Intelligence, Learning | Tags: Hive, File Format, UDF, Hadoop, SQL

Recently, I’ve been asked to review a new book on Apache Hive called “Apache Hive Essentials How-to” written by Darren Lee and published by Packt Publishing. To say it short, I sincerely recommend it. I…

By David WORMS

Apr 23, 2013

Hadoop development cluster of virtual machines with static IP using VirtualBox

Categories: Infrastructure | Tags: Ambari, Cloudera, Hortonworks, Network, Red Hat, VirtualBox, VM, VMware

A few days ago, I explained how to set up a cluster of virtual machines with static IPs and Internet access, suitable to host your Hadoop cluster locally for development. At the time I made use of VMware…

By David WORMS

Mar 14, 2013

Definitions of machine learning algorithms present in Apache Mahout

Categories: Data Science | Tags: Algorithms, Mahout, Classification, Clustering, Machine Learning, Hadoop

Apache Mahout is a machine learning library built for scalability. Its core algorithms for clustering, classification and batch-based collaborative filtering are implemented on top of Apache Hadoop…

By David WORMS

Mar 8, 2013

Virtual machines with static IP for your Hadoop development cluster

Categories: Infrastructure | Tags: Ambari, Cloudera, Hortonworks, Network, Red Hat, VirtualBox, VM, VMware

While I am about to install and test Ambari, this article is the occasion to illustrate how I set up my development environment with multiple virtual machines. Ambari, the deployment and monitoring…

By David WORMS

Feb 27, 2013

Merging multiple files in Hadoop

Categories: Hack | Tags: HDFS, File system, Hadoop

This is a command I used to concatenate the files stored in Hadoop HDFS matching a globbing expression into a single file. It uses the “getmerge” utility of Hadoop but, contrary to “getmerge”, the final…

By David WORMS

Jan 12, 2013

E-commerce electronic cigarettes: first impressions with Prestashop

Categories: Tech Radar | Tags: HTML, Java, Node.js

Last year, I had to select and integrate an e-commerce software for the website CigarHit selling electronic cigarettes. Considering that the last e-commerce integration I made dated from 2005, I took…

By David WORMS

Jul 25, 2012

Node CSV version 0.2.1

Categories: Node.js | Tags: CoffeeScript, CSV, Release and features, Streaming

After the announcement of version 0.2.0 of the Node.js CSV parser at the beginning of October, we are releasing today a new version 0.2.1. This is mostly a bug fix release with enhanced…

By David WORMS

Jul 24, 2012

Node CSV version 0.1 and future developments

Categories: Node.js | Tags: CoffeeScript, CSV, Markdown, Release and features, Streaming

The Node CSV parser has just reached version 0.1, which closes the 0.0.x releases. Started almost 2 years ago, the project has received a tremendous amount of participation in the form of bug reports…

By David WORMS

Jul 21, 2012

Convert .flac music files to .mp3 on OS X

Categories: Hack | Tags: File Format, OS X

As an OS X user for years now, one should know by now that iTunes doesn’t support the FLAC format. We are now in 2012 and I’ve been waiting for this to happen for years. Losing patience, dark…

By David WORMS

Jul 20, 2012

Hadoop and R with RHadoop

Categories: Business Intelligence, Data Science | Tags: HBase, HDFS, MapReduce, Thrift, Data Analytics, Learning and tutorial, R, Hadoop

RHadoop is a bridge between R, a language and environment to statistically explore data sets, and Hadoop, a framework that allows for the distributed processing of large data sets across clusters of…

By David WORMS

Jul 19, 2012

Asynchronous array iteration in Node.js with Each

Categories: Node.js | Tags: Asynchronous, CoffeeScript, JavaScript, Release and features

Control flow in Node.js is the sort of library for which almost every developer has created and published their own library. They usually aim at reducing the spaghetti code made of deep callbacks. I…

By David WORMS

Jul 18, 2012

Installing and using MADlib with PostgreSQL on OSX

Categories: Data Science | Tags: Database, Greenplum, PostgreSQL, Statistics, SQL

We cover basic installation and usage of PostgreSQL and MADlib on OSX and Ubuntu. Instructions for other environments should be similar. PostgreSQL is an Open Source database with enterprise…

By David WORMS

Jul 7, 2012

Node CSV version 0.2 with streaming API

Categories: Node.js | Tags: CSV, Data Engineering, Markdown, Node.js, Streaming

The Node CSV parser in its version 0.2 has just been released. This version is a major enhancement as it aligns the parser with Node.js best practices with respect to streams. The CSV parser behaves…

By David WORMS

Jul 2, 2012

HDFS and Hive storage - comparing file formats and compression methods

Categories: Big Data | Tags: Analytics, HBase, HDFS, Hive, ORC, Parquet, File Format

A few days ago, we conducted a test to compare various Hive file formats and compression methods. Among those file formats, some are native to HDFS and apply to all Hadoop users. The…

By David WORMS

Mar 13, 2012

Two Hive UDAF to convert an aggregation to a map

Categories: Data Engineering | Tags: Analytics, HDFS, Hive, ORC, Parquet

I am publishing two new Hive UDAF to help with maps in Apache Hive. The source code is available on GitHub in two Java classes: “UDAFToMap” and “UDAFToOrderedMap” or you can download the jar file. The…

By David WORMS

Mar 6, 2012

Java versus JS fun, a quote from the Node.js mailing list

Categories: Node.js | Tags: Java, JavaScript, Node.js

I just read that one on the mailing list. I found it relevant enough to share it with those who did not subscribe to it: First Lothar Pfeiler: I still wonder, if it’s cool to have such a big…

By David WORMS

Feb 23, 2012

A fresh look at testing Node.js projects: Mocha, Should and Travis

Categories: DevOps & SRE, Node.js | Tags: CI/CD, DevOps, JavaScript, Mocha, Node.js, Unit tests

Today, I finally decided to spend some time around Travis. It’s been a few weeks since that little green image on top of many GitHub homepages has been buzzing me. Well, to be totally honest, this isn…

By David WORMS

Feb 19, 2012

CoffeeScript, how do I debug that damn JS line?

Categories: Hack, Node.js | Tags: CoffeeScript, Debug, JavaScript, Node.js

Update April 12th, 2012: Pull request adding error reporting to CoffeeScript with line mapping Chances are that, if you code in CoffeeScript, you often find yourself facing a JavaScript exception…

By David WORMS

Feb 15, 2012

Announcing Mecano, a set of functions for system deployment

Categories: DevOps & SRE, Node.js | Tags: Automation, CoffeeScript, DevOps, Infrastructure, JavaScript, Node.js, Open source

Update July 2016, Mecano is now renamed Nikita. We are releasing Node Mecano on GitHub, which gathers common functions used while deploying systems. The idea was to group those functions into a…

By David WORMS

Feb 12, 2012

OS module on steroids with the SIGAR Node binding

Categories: Node.js | Tags: C++, CPU, File system, Metrics, Monitoring, Network

Today we are announcing the first release of the Node binding to the SIGAR library. Visit the project website or the source code repository on GitHub. SIGAR is a cross platform interface for gathering…

By David WORMS

Jan 11, 2012

Timeseries storage in Hadoop and Hive

Categories: Data Engineering | Tags: HDFS, Hive, CRM, File Format, timeseries, Tuning, Hadoop

In the next few weeks, we will be exploring the storage and analytics of a large generated dataset. This dataset is composed of CRM tables associated with one time series table of about 7,000 billion rows…

By David WORMS

Jan 10, 2012

How Node CSV parser may save your weekend

Categories: Hack | Tags: Bash, CSV, Hack, Node.js

Last Friday, an hour before the doors of my customer closed for the weekend, a co-worker came to me. He had just finished exporting 9 CSV files from an Oracle database which he wanted to import into…
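
As an illustration with today's packages from the same CSV project (csv-parse, stream-transform, csv-stringify; the 2011 API differed), a sketch of such an export-to-import conversion could look like this, with hypothetical file and column names:

```javascript
const fs = require('fs');
const { parse } = require('csv-parse');
const { transform } = require('stream-transform');
const { stringify } = require('csv-stringify');

// Stream the Oracle export, adjust each record, write it back out.
fs.createReadStream('export.csv')
  .pipe(parse({ columns: true, trim: true }))        // one object per record
  .pipe(transform(function (record) {
    // Hypothetical cleanup: normalize a date column before the import.
    record.CREATED_AT = new Date(record.CREATED_AT).toISOString();
    return record;
  }))
  .pipe(stringify({ header: true, delimiter: ';' })) // target format
  .pipe(fs.createWriteStream('import.csv'))
  .on('finish', function () { console.log('done'); });
```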

By David WORMS

Dec 13, 2011

Node.js is now integrated to the Microsoft Azure platform

Categories: Cloud Computing, Tech Radar | Tags: Cloud, Linux, Azure, Node.js

Node is now a first class citizen in the Microsoft Azure cloud environment alongside .Net, Java and PHP. This integration is the logical consequence of Microsoft’s involvement in the development of…

By David WORMS

Dec 11, 2011

Hadoop and HBase installation on OSX in pseudo-distributed mode

Categories: Big Data, Learning | Tags: HBase, Big Data, Hue, Deployment, Infrastructure, Hadoop

The operating system chosen is OSX but the procedure is not so different for any Unix environment because most of the software is downloaded from the Internet, uncompressed and set up manually. Only a…

By David WORMS

Dec 1, 2010

Storage and massive processing with Hadoop

Categories: Big Data | Tags: HDFS, Nutch, Cloudera, Google, Hadoop

Apache Hadoop is a system for building shared storage and processing infrastructures for large volumes of data (multiple terabytes or petabytes). Hadoop clusters are used by a wide range of projects…

By David WORMS

Nov 26, 2010

Node HBase, a NodeJs client for Apache HBase

Categories: Big Data, Node.js | Tags: HBase, Big Data, Node.js, REST

HBase is a “column family” database from the Hadoop ecosystem built on the model of Google BigTable. HBase can accommodate very large volumes of data (tera or peta) while maintaining high…
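
A short usage sketch of the client over HBase's REST gateway follows; host, port, table and column names are examples only, so refer to the project documentation for exact options.

```javascript
const hbase = require('hbase');

// Connect to the HBase REST (Stargate) gateway (address is an example).
const client = hbase({ host: '127.0.0.1', port: 8080 });

// Write a cell, then read it back. Columns are addressed as 'family:qualifier'.
client
  .table('messages')
  .row('row-1')
  .put('content:body', 'Hello HBase', function (err) {
    if (err) throw err;
    client.table('messages').row('row-1').get('content:body', function (err, cells) {
      if (err) throw err;
      console.log(cells); // the stored cell(s) with value and timestamp
    });
  });
```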

By David WORMS

Nov 1, 2010

MapReduce introduction

Categories: Big Data | Tags: MapReduce, Big Data, Java, JavaScript

Information systems have more and more data to store and process. Companies like Google, Facebook, Twitter and many others store astronomical amounts of information from their customers and must be…

By David WORMS

Jun 26, 2010

Node.js, JavaScript on the server side

Categories: Front End, Node.js | Tags: HTTP, JavaScript, Node.js, Server

Waiting for the Next Big Language (NBL), it has now been 3 years or more since I predicted to my customers a bright future for JavaScript as a programming language for server…

By David WORMS

Jun 12, 2010

Canada - Morocco - France

International locations

10 rue de la Kasbah
2393 Rabat
Morocco

We are a team of Open Source enthusiasts doing consulting in Big Data, Cloud, DevOps, Data Engineering, Data Science…

We provide our customers with accurate insights on how to leverage technologies to convert their use cases into projects in production, how to reduce their costs and accelerate their time to market.

If you enjoy reading our publications and have an interest in what we do, contact us and we will be thrilled to cooperate with you.