alwaysdata | blog — the blog to follow alwaysdata news

Teaching program for better IT courses Thu, 20 Sep 2018 15:38:11 +0000

Web architectures have become more and more complex. Understanding how they work, and how to use them, is a big challenge. Our engineers started working with Internet stuff a long time ago1), and if we had to learn everything from scratch today, we would probably face an entirely different challenge. Because we had the chance to be there for a long time2), we think it’s our job to help newcomers learn in the best conditions.

x files nod GIF by The X-Files

Teaching program for teachers

Say you’re a teacher or an IT school, and you greet new students for a course cycle. They will learn how Web applications and services work, how to develop them, and how to push them to production. So you need to provide them with a production stack: at least an HTTP server, probably a database, and the language you’re ready to teach them, from Node.js to Python. You really don’t want to spend an entire hour (at least) helping each of them set up their environment, nor provide them with virtual machines that won’t run on everyone’s computers. Not to mention that they will have to deliver a project at the very end, which you will have to test and review.

Why not rely on a ready-to-use solution to address the environment issue?

Here’s our plan with the teaching program. We want to build partnerships with people in charge of IT students. Through this program, teachers can provide their students with a fully provisioned environment.

To get it, just sign up for an account as a teacher, then open a ticket to ask us about the teacher plan, mentioning which course(s) and school you’re teaching for. We will upgrade your profile to a teacher one. This profile allows you to:

  • create as many free plans as you need (one per student)
  • increase the limit of each plan to 200MB
  • use your permissions layer to give each student full rights on their own account, by attaching their e-mail address to it
  • avoid getting your IP blacklisted by our security layer when we detect massive numbers of requests coming from all of your students

That’s all! All your students now have access to a dedicated free plan, where they can use all the features of our platform: languages, unlimited databases, as many sites as they need, etc. You, as a teacher, keep control of their accounts and can revoke their access at any time.

This program is free of charge for teachers in public schools. If you’re a private institution, contact us to build a partnership!

And still 50% for students

We know that, as a student, you often need a hosting plan for your side projects, portfolios, open source tools, etc. Your student or unemployed status doesn’t mean you must give up on high-quality service. That’s why we offer complete access to all our plans at a 50% discount for all students and unemployed people. Don’t hesitate to ask for it!

people you may know GIF by The Orchard Films

Notes

1. I personally built my first website more than 20 years ago
2. who said we’re dinos‽
Fall 2018 Events Tue, 18 Sep 2018 15:12:53 +0000 It’s already mid-September, which probably means you’re now back to work. It also means it’s time for us to take to the road again and greet you with a wave! More than ever, we consider it crucial to share our thoughts and feedback about technologies, security, and privacy with others.

Without further delay, here’s our already confirmed program for this fall 2018 edition:

  • Sept. 19: LTArena (Amiens, France), Privacy By Design(FR)
  • Sept. 27: La Tech Amienoise (Amiens, France), Zero Knowledge Architecture(FR)
  • Oct. 6–7: PyconFR (Lille, France), La Crypto pour les devs(FR), Full-remote, guide de survie en environnement distant(FR)
  • Oct. 18–19: Connect.Tech (Atlanta, Georgia, USA), Crypto for devs(EN)
  • Oct. 24–25: Blend Web Mix (Lyon, France), Privacy By Design, the hard way(FR); here’s a quick introduction, in French:

We’re still waiting on answers from some other events, like Capitole du Libre, POSS, DevDay… We hope to see you there too!

Organizing our events tour is a long-term task, which is why we’re already planning our 2019 events. If you want to see us in Quebec for the 2019 edition, please read our proposals and upvote them.

star trek hello GIF

See you there!

Custom logs Thu, 26 Jul 2018 11:37:58 +0000 Here’s the last blog post in our series about new features in our reverse-proxy engine. Previously, we talked about the WAF and the HTTP cache. Now it’s time to introduce you to custom logs.

Log GIF @Giphy

Upstreams logs

At alwaysdata, an upstream is an HTTP server our proxy uses as a backend to serve pages to your visitors. An upstream can be a built-in HTTP server embedded in your application, or a dedicated HTTP server like Apache or uWSGI.

We now write all output messages from the standard streams1) to a dedicated file, available in the ~/admin/logs/sites/ directory. These logs allow developers and DevOps people to monitor and debug their applications running on our platform. When you rely on a custom upstream (like a Node.js service), you can read your application’s output, which helps you find the glitch when a service refuses to start properly.

The same file hosts all messages written by all upstreams belonging to the same alwaysdata account. Each upstream uses its PID2) to mark its lines in the log file. PIDs let you retrieve which process (a.k.a. which upstream) output a given line. This identifier is available between brackets after the date: [14/Jul/2018:10:04:21 +0200] [PID]. When an upstream ends (e.g., after staying idle for a long time), it can get a different PID when it restarts. Two lines are output in the log file each time an upstream wakes up, which lets you match PID and upstream:

[14/Jul/2018:10:04:21 +0200] Upstream starting: /command/to/your/upstream ...
[14/Jul/2018:10:04:21 +0200] Upstream started PID: 12345
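Those two startup lines make it easy to map a PID back to its upstream with a few lines of script. Here is an illustrative Python sketch (the third log line is made up for the demo; real files live under ~/admin/logs/sites/):

```python
# Reproduce the PID-matching trick on sample log lines.
import re

log = """\
[14/Jul/2018:10:04:21 +0200] Upstream starting: /command/to/your/upstream ...
[14/Jul/2018:10:04:21 +0200] Upstream started PID: 12345
[14/Jul/2018:10:04:22 +0200] [12345] Listening on port 8300
"""

# Grab the PID announced at startup...
pid = re.search(r"Upstream started PID: (\d+)", log).group(1)
# ...then filter that upstream's own lines, marked with [PID].
upstream_lines = [line for line in log.splitlines() if f"[{pid}]" in line]
print(upstream_lines)
```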

Access logs

Now, you can choose the name given to the access log files. To customize this entry, go to Sites → Edit → Logs.

Screenshot of website's logs customization view

You can also edit the output formats. If you need to process your log files with a parser or a script, custom output formats let your log files fit your workflow. This field accepts variable names between braces {}; their values are substituted at writing time. You can also include free-form text. The syntax and the available variables are documented on our logs page.

The default format is:

{request_hostname} {client_ip} {default} {default} [{completion_date:{%d/%b/%Y:%H:%M:%S %z}}] {request} {status} {response_size} {referer} {user_agent}

It returns this string: - - [16/Jul/2018:12:04:07 +0200] "GET /wp/ HTTP/1.1" 200 55380 "-" "curl/7.47.0"

To customize the output to include the protocol, the request duration, and some free text, you may use the following syntax:

[{completion_date:{%d/%b/%Y:%H:%M:%S %z}}] protocol: {protocol} {request} duration: {duration} seconds

Which outputs:

[16/Jul/2018:12:04:07 +0200] protocol: "https" "GET /wp/ HTTP/1.1" duration: 0.134 seconds
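The substitution mechanism is simple to picture. Here is a rough Python sketch of the idea, handling only simple {name} placeholders (the nested date-format syntax shown above is specific to the platform and not covered here):

```python
# Illustrative only: replace each {name} placeholder with its request value
# at write time, leaving the free-form text untouched.
import re

def render_log_line(fmt, values):
    return re.sub(r"\{(\w+)\}", lambda m: str(values[m.group(1)]), fmt)

line = render_log_line(
    "protocol: {protocol} {request} duration: {duration} seconds",
    {
        "protocol": '"https"',
        "request": '"GET /wp/ HTTP/1.1"',
        "duration": 0.134,
    },
)
print(line)
```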

With fully customizable access logs and easier upstream debugging, we’ve designed a more comfortable hosting platform. Monitoring and observing services in production at alwaysdata becomes painless.

This blog post is the last one about our new proxy’s features. We built our service for you and with you. Please help us improve it further by leaving feedback in the comments to tell us which features you’re missing!

jin yang handshake GIF by Silicon Valley @Giphy

Notes

1. stdout and stderr
2. Process IDentifier
HTTP Cache Wed, 25 Jul 2018 12:37:50 +0000 Here is our second article dedicated to our new reverse-proxy engine and its awesome features! After the Web Application Firewall, we now take a look at the HTTP cache built into our infrastructure.

punch it star trek GIF @Giphy

What is an HTTP cache?

A good blog post is a post with a chart

We tested our WordPress blog’s performance using the new HTTP cache built into our proxy. Here is the result, which lets us bet that you may like this new feature:

There’s a considerable improvement in the number of requests handled by the proxy when we enable the cache. While we serve only 15 req/s without it, the rate increases to 2,604 req/s: a factor of 173, for the same timeframe. The response time also improves, falling to roughly 0.38ms instead of 63.65ms. Not bad for a feature that’s effortless to use!

We made this benchmark using ApacheBench, requesting the blog homepage. We ran each shot1) four times, with and without the cache enabled, before compiling the results. Our blog runs on a dedicated server, but we expect a similar rate for shared hosting instances. You can run the test yourself by connecting to your account over SSH and running the ab command with the same options against your own website.

How does it work?

A cache is a temporary storage that can serve previously stored results when they are requested again. An HTTP cache is a cache that stores web pages and assets. It is primarily used to decrease the load on an upstream server that would otherwise serve a frequently requested page unmodified between two requests.

When a client requests a page from a web server, the server generates an HTML response and sends it to the client over the network. Before the response leaves the infrastructure, the HTTP cache intercepts it and stores it in its memory before letting it go.

Caching a resource (schema)
Caching a resource when a new request happens (icons from The Noun Project)

When our proxy encounters the same request again, it asks the cache for an available version. If the page is available in the cache memory, it is served instead of asking the upstream server.

Serving a cached resource (schema)
Serving previously cached resource (icons from The Noun Project)
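The lookup/store cycle described above can be sketched in a few lines. This is a deliberately naive Python model with a simple TTL, not the actual implementation:

```python
# Minimal sketch of TTL-based HTTP caching: serve a stored response until
# its TTL expires, sparing the upstream server a regeneration.
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (expiry_timestamp, response)

    def get(self, url):
        entry = self.store.get(url)
        if entry and entry[0] > time.monotonic():
            return entry[1]          # cache hit: skip the upstream
        return None                  # miss, or entry expired

    def put(self, url, response):
        self.store[url] = (time.monotonic() + self.ttl, response)

cache = TTLCache(ttl_seconds=10)
url = "https://example.org/"
if cache.get(url) is None:                       # first request: a miss
    response = "<html>generated by the upstream</html>"  # "expensive" call
    cache.put(url, response)
print(cache.get(url) is not None)                # next request is a hit
```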

Use it at alwaysdata

If you want to use the HTTP cache, you can enable it individually for any site in the Sites → Edit → Cache section. Tick the Enable the cache option.

Screenshot of the interface to enable cache for a web site

You must set the TTL for the pages served by this website. The TTL defines how long the cache retains a page before expiring it. Choose it carefully: while we recommend a high TTL for pages that are rarely modified, you should reduce it for highly dynamic content like a news website. If you set too long a duration, your visitors may see an expired page instead.

For instance, we need every visitor to see the new homepage as soon as we publish a new article, because the previous version of the homepage is then outdated. So we prefer a TTL between 5 and 10 seconds. This way, we benefit from the high performance offered by the cache, with a relatively low risk of serving an old page.

This feature is currently in beta test and may evolve during the next weeks.

This feature needs your application or website to authorize the cache to handle the requests. If resources aren’t explicitly marked as cacheable by your app, our HTTP cache may not be able to store them.
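In practice, marking a resource as cacheable typically means setting a Cache-Control header on the response. Here is a minimal, illustrative WSGI app (not an alwaysdata-specific API) doing just that:

```python
# A tiny WSGI app whose responses are explicitly cacheable: "public" lets
# shared caches (per RFC 7234) store the response, "max-age" acts as a TTL.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Cache-Control", "public, max-age=10"),  # cacheable for 10 seconds
    ]
    start_response("200 OK", headers)
    return [b"<h1>Hello, cached world</h1>"]
```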

What’s behind the scenes?

For technical people, here’s how we built the cache. We chose to write, in Python2), a module that follows RFC 7234. A local Redis instance stores the cached resources, which lets us manage the memory dedicated to storage effortlessly.

We also chose to implement the HTTP PURGE verb. This method lets you remove the cached version of a resource by calling it on the resource’s URL. You can then force a cache refresh easily.

After performance, we made a significant effort on logging! In our next and last blog post, we introduce the new logging system, which lets you store custom-formatted logs so you can debug your upstream applications effortlessly.

Notes

1. ab -c 10 -t 60
2. cause we ♥ Python at alwaysdata
Web Application Firewall (WAF) Tue, 24 Jul 2018 11:37:19 +0000 We have deployed a new version of our HTTP reverse-proxy engine on our production servers. It embeds a lot of new features, which we cover in this collection of blog posts.

Here, we want to introduce you to the Web Application Firewall (WAF) that is now built into our reverse-proxy.

What’s a WAF?

All software has bugs. That’s even true for web applications. They may present security holes that can compromise their integrity. Attackers may want to gain full control over the web application; we call this kind of attack an infection. If they compromise the service itself, the consequences may be dramatic, from a simple unavailable website to a leak of personal data.

Cybersecurity is a full-time job. By following some good practices, and by using a WAF, you can increase your security level. Good news! We now embed one directly inside our infrastructure, and using it is as simple as a click.

Gandalf GIF @Giphy

A Web Application Firewall is a firewall that protects your website from malicious requests. It parses HTTP(S) requests and allows or denies their access to the server. It can block, alert on, or quarantine any request it considers malicious. It can also react to many attacks, to limit infections.

The request goes through the WAF to be analyzed. The firewall then decides to let the request go to the upstream server or to drop or isolate it (illustration)
The HTTP request goes through the WAF (icons from The Noun Project)
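To give a feeling of what “analyzing a request” means, here is a deliberately naive Python sketch. Real engines such as ModSecurity rely on large curated rulesets, not on three hand-written regexes:

```python
# Toy WAF: match a request line against a few naive attack signatures and
# decide whether to let it through. Illustrative only.
import re

RULES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # naive SQL-injection probe
    re.compile(r"(?i)<script\b"),               # naive XSS probe
    re.compile(r"\.\./"),                       # path traversal
]

def inspect(request_line):
    """Return True if the request may pass, False if it is dropped."""
    return not any(rule.search(request_line) for rule in RULES)

print(inspect("GET /blog/?page=2 HTTP/1.1"))                   # legitimate
print(inspect("GET /?id=1 UNION SELECT password FROM users"))  # dropped
```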

ModSecurity WAF

Instead of developing a new solution from scratch, we chose the ModSecurity WAF, developed by Trustwave SpiderLabs. This project has an excellent reputation in the security world. It’s also an open source project, so we can stick to our policy of giving you a hosting platform powered only by open source solutions. Finally, the ModSecurity community is very active, which helps the project evolve day by day.

ModSecurity is only a security engine. It uses sets of rules to analyze a request and mark it as malicious or not. We chose the open source ruleset from the OWASP ModSecurity Core Rule Set (CRS), which offers an excellent level of protection for web applications. It also covers the OWASP Top 10, with a very low rate of false positives.

Configuring a WAF at alwaysdata

For convenience, the built-in WAF can be enabled individually for every website hosted at alwaysdata.

Six profiles with various levels of protection are available:

  1. Disabled
  2. Basic
    • Force strict HTTP protocol
    • Detect malicious bots
  3. Strong
  4. Full
    • Strong profile
    • Detect attacks for PHP language
    • Detect attacks by Local File Injection (LFI)
    • Detect attacks by Remote File Injection (RFI)
  5. WordPress
    • Full profile
    • A WordPress-specific ruleset
  6. Drupal
    • Full profile
    • A Drupal-specific ruleset

Please note that activating the WAF may increase latency for every HTTP(S) request. This latency (a few ms) grows with the robustness of the selected profile, because the request-parsing time increases with the number of OWASP rules to apply.

To use it, select a protection profile in the Sites → Edit → WAF section.

Screenshot of the interface allowing you to enable the WAF in your web site

This feature is still in beta.

Our objective is to give you a reliable, robust, and safe environment for your hosting, without a mess of complexity. That’s why we want to give you a solid built-in WAF that you can enable with a simple click.

After security, performance! In our next blog post, we introduce our new HTTP cache and its impact on the delivery of your websites.

We believe in open source projects Fri, 08 Jun 2018 10:43:09 +0000 And all of a sudden, Microsoft is acquiring GitHub, infuriating the open source community these last days. Behind the angry tweets, there’s a realization that a monopolistic situation may jeopardize the open source ecosystem. It seems about time to discuss why decentralization is necessary, and to present our initiative: support for open source projects.

Windy Mary Poppins GIF @Giphy

GitHub, Microsoft, Open source: why such a shitthunderstorm?

GitHub is a closed source platform based on an open source project: Git. Git is a distributed version control system. It allows developers, designers, editors, etc. to save their projects step by step on a timeline. They can keep track of what has been done, by whom and when, and they can roll back some modifications if needed, or even restart their projects from a past state1). While Git only brought the version control system, GitHub enriched it with social features like issue tracking, documentation wikis, collaboration tools (pull requests), reviews, and many more.

GitHub is a SaaS2) solution, available in two flavors: free for open source projects; paid, or on-premise in their enterprise edition, for private repositories. It quickly gained the open source community’s attention thanks to its simplicity of use and its “social” tools. It then grew into the central place for open source projects over the last ten years.

Maybe you’ve already noticed the paradox. Let’s state it again: Git is a distributed system; GitHub has grown into a central place. It’s the neverending war over decentralizing the Web. Year after year, because many projects use the platform, and because it’s easy, it became the place to be when you released your open source project. It gave visibility, and quick and simple access to an upcoming community. It even became a way to distribute dependencies as code, with many languages choosing to use it as a native backend (see Golang, Node.js, etc.).

However, a single place to host everything means you take a considerable risk if it fails. As Hubert Sablonnière said:

So it is. Or so it seemed to be for many defenders of the free and open source philosophy when Microsoft recently announced its intention to acquire GitHub. Thousands of open source contributors have been there for many years, and many of them have seen how Microsoft acted in the past. Even if Microsoft is now quite involved in open source contributions, some of them are frightened by this announcement, and are starting to wonder what Microsoft has in mind for the future of GitHub. So far, Microsoft has mainly spoken about cloud deployment integration, but what will happen to GitHub-driven projects like Hubot, Electron, Atom, etc.?

This acquisition painfully revives the debate around decentralization, this time from the developer’s side. Maybe some projects will host their sources independently, to stay away from the GAFAM and their monopoly. Don’t forget that Git is decentralized by default3). GitHub is only a platform. The choice is still yours.
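As a reminder of that decentralization, a single Git repository can track several remotes at once. A quick illustrative session driving the git CLI from Python (placeholder URLs; `git remote add` performs no network access, but git must be installed):

```python
# One local repository, two remotes: nothing ties your history to a single
# platform. The URLs below are placeholders.
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command inside the demo repository and return its output."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    ).stdout

git("init", "--quiet")
git("remote", "add", "origin", "https://github.com/you/project.git")
git("remote", "add", "mirror", "https://git.example.org/you/project.git")
print(git("remote").split())  # both remotes are configured locally
```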

So, what is alwaysdata doing for open source?

We believe in open source. We based our solution on the Django framework; every third-party software we use for hosting is open source; we sometimes release some of our internal developments (see Deploy at lightning speed with Git hooks). It was time to give back to the community: this is why we offer a free 10 GB plan for open source projects.

We think that many alternatives are more valuable than a single offer, even a great one. Earlier this year, we ran a test with the Sailor framework project: as we are one of the only hosting providers to support Lua natively, they contacted us to see whether we could conclude a partnership. We then started to think about what we could do for open source. We already had a 100 MB free plan, which was a bit small for open source projects; what if we offered a 10 GB plan, for free?

Open source projects, this is our contribution: if you need a place to host your project (repositories, websites, demos, etc.), you can do it on alwaysdata for free. We will never charge you for anything. We limit the account to hosting open source, active projects only.

We don’t expect the whole open source community to come to alwaysdata4). But the world needs as many good-minded, respectful alternatives to other OSS-friendly hosting solutions as there can be. If you want to benefit from this offer, get in touch!

May the source be with you.

Notes

1. this is a very simplistic view; Git is even more powerful
2. Software as a Service
3. yes, you can have multiple remotes for a repository, and since you host the full history of your project locally, you don’t have to stick to one platform. Did you know that?
4. but we welcome any project that thinks our offer fits its needs
About our community Tue, 15 May 2018 11:37:49 +0000 alwaysdata started in 2006 because we, as a web agency, hadn’t been able to find a hosting platform capable of fitting our needs. More than ten years later, now that hosting is our core business, we’re still here, with thousands of websites and applications hosted on our infrastructure, served daily to millions of users. Behind alwaysdata, there is a team of cool people who want to keep doing what they love: their best to provide you with a sharp hosting platform. And have fun. A lot.

The IT Crowd @Gifbin

So, because our adventure could not exist without you, we decided to do more things (and have more fun) with our community. Here’s a list of our first partnerships, the ones that open the way to many more projects!

Ready for space? Get your Cyberspace Building Crew seat!

Last year, Julien Dubedout, a French designer who works on several projects (like the Caliopen messaging client), had a discussion in which a few people imagined logos for Web workers, inspired by the NASA mission patches. Several months later, he started a crowdfunding campaign on Kickstarter for the Cyberspace Building Crew project to make those patches and stickers a reality.

Unfortunately, the initial batch of designs missed the big one for us at alwaysdata: the one representing hosting. We quickly decided to fix that! After making sure Julien was OK with designing a new patch for our job, we unlocked a Sponsor pack to allow him to create the design for hosters. Here’s what he drew for all hosters: we’re now the Space Station for Web workers!

The cyberspace building crew patch for hosters: the space station

The pack gives us a bunch of those designs in both patch and sticker formats. Feel free to come see us at conferences to get one when they’re ready.

About conferences

As Web workers, we constantly keep an eye on new technologies, new frameworks, and new solutions that will help you, developers, build your services. We want to provide you with the best support for the technologies you need on your backend. To be able to give you this support, we learn a lot, and we do a massive technological watch.

Giving you feedback on what we’ve learned is essential for our team, and that’s why we are currently writing more articles here, about various technical subjects. We’re also increasing our participation in many events, both as attendees and as speakers.

But this isn’t good enough for us. We believe in diversity as well. So we decided to support communities involved in diversity in tech. For the moment, this will consist of tickets we will offer for the conferences we attend, starting with upcoming events like Web2Day. We are settling the last details to distribute those tickets. We will keep you informed both on Twitter and here, so stay tuned!

Press Newspaper GIF @Giphy

If you’re involved in communities engaged in diversity in tech, please get in touch in the comments or by e-mail, so we can find a way to work together.

Last news: a new batch of cool alwaysdata t-shirts is coming; try to find m4dz at tech events to get one 😉

GDPR at alwaysdata: what does it imply? Mon, 07 May 2018 14:20:55 +0000 Voted in 2016 by the EU Parliament, the new General Data Protection Regulation becomes enforceable on May 25, 2018. This new regulation is an essential change in European data protection law, and replaces the EU Data Protection Directive (Directive 95/46/EC) as well as local laws relating to data protection.

We, as a hosting provider, are covered by the GDPR and ensure our services comply with the terms of the regulation by May 2018. As we have already said, we strongly believe in privacy, and we encourage initiatives that strengthen the fundamental right to privacy for all citizens of the world.

To keep our customers informed about what the GDPR is, and how it applies to alwaysdata services, here’s our digest.


The GDPR itself introduces some terms that may need some explanation. Here’s a lexicon of the terms used in the regulation, in our TOS, and in this article as well.

Personal Data
Any information related to an identified or identifiable natural person. It includes civil data (birthdate, address, etc.) as well as technical data (IP address, GPS coordinates, etc.)
Data Controller
The Controller is a natural or legal person, public authority, agency, or any other body which determines the purposes and means of the processing of personal data. It is the one who decides what to do with the data.
Data Processor
The Processor is any body which processes personal data on behalf of the Controller.
Data Protection Officer
The Data Protection Officer (DPO) is the person who, inside any company, ensures that data processing operations comply with all applicable European regulations. The DPO is entirely independent of the company’s other operations.
Subcontractor
Any partner that, for the purposes of personal data processing, is mandated by the data processor and may have access to personal data transmitted by the processor. It must be GDPR compliant too, and the client must have been informed that the subcontractor may access their personal data.

What is the GDPR?

The General Data Protection Regulation (GDPR) is the new European privacy law that replaces existing laws about data privacy on EU territory. It takes precedence over any local law, as well as over the EU Data Protection Directive. It doesn’t introduce significant changes but is intended to enhance and harmonize EU data protection laws for all EU citizens. It applies worldwide if your data is located inside the EU, or if your service handles EU citizens’ personal data. It becomes enforceable on May 25, 2018.

It mainly describes some topics:


Lawfulness of processing

Any processing of personal data is prohibited unless expressly permitted.

Purpose limitation

Companies may only collect and process personal data for specific purposes. These must be clearly outlined, and the future use of the data must be documented.

Data minimization

Companies must collect as little data as possible, and only as much as necessary. It also means that “blind” data collection for unspecified future purposes is prohibited.

Transparency

Data processing should be understandable and comprehensible to anyone concerned. Companies are required to provide crystal-clear information about what data they use, and for what purposes.

Data security

Companies need to ensure, and prove, that they technically protect the personal data of their clients and employees. Data must be protected against unauthorized processing, alteration, theft, destruction, etc.

Who does the GDPR apply to?

The GDPR applies to any company or organization operating in the EU, or processing EU citizen’s personal data.

What role will alwaysdata play in the GDPR context?

alwaysdata can be considered both a data processor and a data controller. The former because, most of the time, we just process our customers’ data under their control, as we host your apps, websites, and services; the latter because we also hold information about our clients in our own system, e.g., your contract information.

alwaysdata commitments as a processor

  • We won’t process any data without the explicit order of the data controller.
  • Data is kept inside the EU, provided customers do not select a location in a geographical area outside the EU. This location may evolve, always under the control of the customer itself.
  • We inform customers of any enlisted subcontractor which accesses their data, which data is concerned, and for what purposes.
  • We apply security standards to protect the data lifecycle.
  • We will publicly report any incident in case of a data breach, without undue delay.
  • We provide documentation to prove our conformity with the GDPR.

alwaysdata commitments as a data controller

  • We limit the personal data collected to what is strictly necessary when you order a service, for billing or support purposes.
  • We only use personal data for what it is contractually intended.
  • We do not keep data when it’s no longer relevant.
  • We do not transfer data outside of the EU without your explicit consent.
  • We implement appropriate technical measures to ensure a high degree of security for personal data.

alwaysdata’s security measures

We distinguish two kinds of security measures: those concerning the data stored by the customer, and those concerning the security of the infrastructure that stores the information.

Regarding data stored by our customers: the customer is solely responsible for its data, by ensuring the security of its service, website, application, and whatever it deploys on the alwaysdata infrastructure.

Regarding alwaysdata’s infrastructure: we are committed to ensuring optimal security. Physical access to the systems is strongly regulated; software is monitored, patched, and updated with security releases; and we use technical devices to prevent attacks and intrusions.

What has alwaysdata been doing to prepare for the GDPR?

alwaysdata was already compliant with the 95/46/EC Directive; we have built on that foundation. We mainly worked on a new version of our TOS to ensure they reflect our obligations under the GDPR. Technically, we have followed the Privacy by Design principles since we started, and we have never collected any information we didn’t explicitly need. We have been GDPR compliant for a while now. Time for the compliance gig!

The principles of Privacy by Design and Privacy by Default

We already have design and development processes that comply with Privacy by Design and Privacy by Default. Privacy has been one of our main concerns for a while now; you can watch a talk about Privacy by Design[fr] given by m4dz, our tech evangelist, at the last Breizhcamp 2018 edition.

Extensive information rights, and right to deletion

We already delete personal data at account deletion. We keep nothing on alwaysdata’s side, except what is needed for legal reasons, like transaction data and logs, for a limited amount of time.

The right to data portability

Our API, which has been available for a while, allows you to access all the information about you stored in alwaysdata’s system. You can also ask us about your personal data by e-mail.

No linking of consents

We do not transmit to any subcontractor any information other than that mentioned in our TOS for support purposes, without your explicit consent.

What must customers do?

As far as alwaysdata is concerned, you have nothing to do besides reading and accepting our new TOS. We also strongly encourage our customers and partners to start preparing for the GDPR now. If you already have robust security and good data privacy practices, the shift should be simple.

You can access our new TOS here. Feel free to ask in the comments if you have any questions about EU data protection at alwaysdata.

We’ll be there: events and conferences in May and June 2018 Fri, 04 May 2018 08:59:33 +0000 It’s been a long time since our last post. We are actively preparing new releases and articles. We’re also involved in many events and conferences we need to prepare for.

To make sure none of you misses the rendezvous, here’s a quick list of where we will be:

  • May 17, Sophia Antipolis: RivieraDev, La Crypto pour les devs (regular talk, FR)
  • June 1, Paris: DotScale, Automate deployment using Git, compared to other solutions (lightning talk, EN)
  • June 6, Paris: Vue.js meetup, Use Atomic Design in Vue.js components development cycle (regular talk)
  • June 8, Lille: Takeoff Conf, From ground control to the moon: choose your hosting partner (regular talk, EN)
  • June 10, Lille: DjangoCong, La Crypto pour les devs update: Déploiement continu et feature-flipping, uniquement avec Git (regular talk, FR)
  • June 15, Nantes: Web2Day, Privacy by Design (regular talk, FR)
  • June 28–29, Choisy-le-Roi: Pas Sage en Seine (PSES): Architecture Zero Knowledge et Webapps : est-ce possible ? (regular talk, FR); Full-remote : guide de survie en environnement distant (regular talk, FR); Privacy by design (regular talk, FR)

@m4d_z, our Tech Evangelist, will talk at the events above. Get in touch with him if you’re there!

Secured remote access, the hard way Tue, 27 Mar 2018 21:01:31 +0000 I recently told you about the choices you have to make for your sysadmin stack. In fact, these choices impact your whole technical chain, from your OS and its setup to the system architecture. It means you have to adjust the cursor between maintaining each element by yourself or relying on a partner to manage them for you.

Securing remote access ranges from trivial steps to points that are hard for non-experts to grasp. Consider this article as primary guidance, and feel free to follow the links to more in-depth articles1)2).


TL;DR: security is a vast and complex realm of knowledge. If you only want to know how to improve your remote access to alwaysdata servers, jump to the section about using secured methods.

Don’t let anybody enter your home

You run your application on remote servers, which lets you provide your service from anywhere. You usually have no physical access to those machines3), so you access them remotely. Most of the time, it’s through a remote shell, something like SSH. SSH gives you access to both a shell and a low-level file manager on your remote machine. Consider it the door to your living room: not securing it is nonsense. Never do this at home, kids.

An important security concern is the way you manage remote access to your production servers. Here too, you may want to handle it yourself, or you may prefer a service that embeds a secure setup for you. At alwaysdata, we think that using a service that does it for you is often the best choice, but some parts remain yours to handle. Because this part is sometimes difficult to understand and to maintain, we want to give you some advice on how to secure your access correctly.

The right level of security is the one you’re able to understand and to manage. So, to make sure you have the expertise needed to be safe at home, let’s talk a bit about what you need to know to protect your remote access and avoid letting just anybody in. My goal is not to turn you into a security expert; it is merely to ensure you know the basics (and probably a bit more) about security and why it matters.

Fortunately, common SSH servers ship with a mostly secure default configuration. OpenSSH, the default SSH server on most platforms, does. But distributing a default configuration means that maintainers address the common use cases rather than a hardened setup. The defaults will probably lock out 90% of malicious users. Think of it as fitting a standard robust door to your mansion: better than nothing, but still not an armored door. Let’s consider how to strengthen this default setup.

Be Safe, Be Smart

Securing your remote access means taking care of a few things, mainly because you stay in charge of the client side. Even with a properly secured server, you may get into trouble if your client-side protection is too weak. Let’s take a look at your responsibilities.

Use SSH Keys to connect

We recommend using SSH keys rather than password authentication for SSH remote access. This way, you can efficiently manage who can access your remote server, from which device, and revoke access in case of emergency4). It’s a good idea to have one pair of keys5) per user per device. It gives you the flexibility to revoke access when a device is lost or stolen, instead of revoking the user’s access entirely.

To know how to generate a secure, updated key, take a look at the ed25519 key format section.

Avoid password connection

Passwords can’t be made secure enough to be the only protection against malicious access. Use key-based login as mentioned above to protect your remote user, e.g., from a brute-force attack trying to crack the password.

At alwaysdata, SSH connection using a password is disabled by default, and only connections using SSH keys are allowed. You have to add your public key to the ~/.ssh/authorized_keys file manually6). If you nonetheless need to enable password access (e.g., to transfer your public key first), check the box to authorize connection using password. Please keep in mind that it’s a less secure method, so turn it off when you don’t need it.
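Installing the key by hand is short; here is a minimal sketch of the server-side steps, including the strict permissions OpenSSH expects (the key material and user name are placeholders, and the demo runs on a temp path so it can be tried safely; on a real account you would use $HOME instead):

```shell
# Create the .ssh directory with the permissions sshd requires
DEMO_HOME=$(mktemp -d)
mkdir -p "$DEMO_HOME/.ssh"
chmod 700 "$DEMO_HOME/.ssh"

# Append your public key (this one is a truncated placeholder;
# paste the content of your own ~/.ssh/id_ed25519.pub)
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... user@laptop" >> "$DEMO_HOME/.ssh/authorized_keys"
chmod 600 "$DEMO_HOME/.ssh/authorized_keys"
```

On servers where password access is temporarily enabled, `ssh-copy-id user@host` automates the same steps.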

If you run your personal server, after copying your public key on the server, disable password-based login in sshd configuration.

# /etc/ssh/sshd_config

AuthenticationMethods publickey
PubkeyAuthentication yes
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no

It also seems that some users/administrators think that not protecting their accounts with a password is a good idea. Personally, I think that even the weakest password is better than no password at all. In any case, you must prevent those password-less users from logging in remotely without any protection.

# /etc/ssh/sshd_config

PermitEmptyPasswords no

We forbid empty passwords on alwaysdata servers.

Use strong passphrases

Using strong keys is essential, but you can’t rely on them without securing them. Each SSH key must be protected with a unique, dedicated, strong passphrase. The same applies to your user’s login access, so feel free to use the same method to set up your user’s password.

Here are two ways to generate random passphrases from the CLI7):

$ openssl rand -base64 32
$ gpg --gen-random --armor 1 32

You should have openssl rand or gpg (or both) installed on your system. Make sure to check your password resilience against brute-force attacks with the How Secure Is My Password tool. You don’t want a weak password to compromise your whole business security.
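As a quick sanity check (a sketch, assuming `openssl` is installed), you can verify that the command really yields 32 bytes of entropy: base64-encoding 32 bytes always produces a 44-character string.

```shell
# 32 random bytes, base64-encoded: ceil(32/3) * 4 = 44 characters
PASSPHRASE=$(openssl rand -base64 32)
echo "${#PASSPHRASE}"   # → 44
```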

Use cryptographic hardware

If you’re concerned about secure remote access, you should consider certificate-based client authentication backed by a hardware device, so your keys stay safe from attackers and unreadable even from your own computer. You can use a cryptographic hardware device like the Nitrokey Pro or YubiKey 4 to store your keys/certificates and handle client authentication for you.


Stay up-to-date

Keep your SSH server updated

SSH security breaches are regularly discovered and quickly patched by the projects’ maintainers and contributors. Keeping your server updated is necessary, and it implies keeping your SSH client up to date too. As I write this blog post, the current OpenSSH version is 7.6, released Oct. 3, 2017. Check your current version:

$ ssh -V
OpenSSH_7.6p1, OpenSSL 1.1.0g  2 Nov 2017

Use SSH v2 protocol

SSH comes in two protocol versions. The current, modern one is v2: more secure, more flexible, and encrypted at every step of the process. Almost every client has supported v2 for a long time, so feel free to disable v1 and serve only the latest if you run your own server.

# /etc/ssh/sshd_config

Protocol 2    # defaults to "2,1"

For legacy reasons, v1 remained available on alwaysdata servers for a while. Since the release of OpenSSH 7.6, the v1 protocol is removed from the codebase entirely (unavailable even via compilation flags), so stay up to date.

Rely on strong SSH Key with ED25519

You can use several key formats for your SSH Keys. Preferably, you should use the ED25519 algorithm, as it is the best choice nowadays regarding security. Yes, it’s relatively new, but it’s well supported in production right now, and as you manage the server side (or rely on a trusted, dedicated partner), compatibility may not be a big deal.

$ ssh-keygen -a 1000 -t ed25519 -C "user@device"

It will:
  • generate a new key
  • -a: set the number of rounds for the key derivation function to 1000, which increases security but also the login time. If a quick login is critical to you, consider decreasing this value (the default is 16; 100 already offers robust protection against brute-force attacks)
  • -t: force the ed25519 algorithm
  • -C: identify your key with your username/device
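A minimal sketch of the full round-trip (the comment string is a placeholder, and the demo uses an empty passphrase only so it can run non-interactively; always set a real passphrase on your actual keys):

```shell
# Generate a throwaway ED25519 key pair in a temp directory
KEY="$(mktemp -d)/id_ed25519"
ssh-keygen -a 100 -t ed25519 -C "user@laptop" -N "" -f "$KEY" -q

# Inspect the fingerprint: the output ends with the algorithm, e.g. "(ED25519)"
ssh-keygen -l -f "$KEY"
```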

Network protection

This section contains advanced tips for sysadmins who want to lock down their environment. They may or may not apply, depending on your context and your threat model. The former sections cover most remote access use cases, but you may want to reinforce your security further.

Firewall the SSH TCP Port

If you always connect to your servers from a known set of dedicated IPs, it’s useless (and dangerous) to let any incoming connection attempt to log in. You should restrict SSH login to known addresses in your firewall rules.

$ iptables -A INPUT -p tcp -s 203.0.113.10 -d 198.51.100.1 --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
$ iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Here, we allow connections from the client IP 203.0.113.10 (a documentation-range placeholder; use your own, obviously) to the SSH port 22, in TCP only, on the local machine address 198.51.100.1, using iptables8). Firewalling can be hard, but it’s one of your strongest defenses against attackers.

Some of our machines, which handle parts of the alwaysdata architecture not meant to be publicly exposed, are IP-filtered like that. If you have a VPS or Dedicated Server offer, you can activate IP Filtering in your admin panel to ensure only your IPs can connect to your servers.

Blacklist SSH crackers and brute-force attackers

One of the problems with attackers trying to get access to your machine is that they will request access a huge number of times. Even if they don’t get in, it will overload your SSH server, maybe slow down your system, and perhaps even result in a denial of service. To prevent that, you can use your firewall to ban the attackers’ IPs. Of course, you won’t blacklist them manually; instead, rely on tools like fail2ban, DenyHosts, or a custom rule in iptables or ufw to ban them temporarily.
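As an illustration, a minimal fail2ban jail for sshd might look like this (a sketch using fail2ban’s standard option names; tune the thresholds to your own policy):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled = true
port = ssh
# ban for 1 hour after 5 failed attempts within 10 minutes
maxretry = 5
findtime = 600
bantime = 3600
```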

In the alwaysdata architecture, SSH servers are protected by a fail2ban filter.

Change SSH port and limit port binding

OpenSSH listens on port 22 by default, which makes it an easy target: attackers will try this port first when probing for SSH breaches. If you don’t care about compatibility and use machines you’re comfortable with, don’t hesitate to change the default port and the addresses the server binds to, to decrease the attack surface.

# /etc/ssh/sshd_config

Port 2002
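If you do change the port, spare yourself typing -p 2002 on every connection with a client-side entry (a sketch; the host alias, hostname, and user are placeholders):

```ssh-config
# ~/.ssh/config on your workstation
Host myserver
    HostName server.example.com
    Port 2002
    User myuser
```

`ssh myserver` then reaches the right port automatically.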

Port knocking

This is one of the most fun tips.

It’s a method that forces remote users to try connecting to a series of ports to trigger a firewall rule that opens access to the SSH port for their IP. By default, the firewall rejects connections to the SSH port. Using a dedicated tool (knock), you contact your server on a sequence of ports. If the sequence is correct, the firewall is updated, and a new rule allows only your client IP to connect through SSH. Simple, smart, efficient9). It uses the knockd tool on the server side.
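Server-side, knockd’s configuration ties the secret sequence to a firewall command; here is a sketch with a hypothetical sequence (pick your own secret ports):

```ini
# /etc/knockd.conf
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

From the client, `knock server.example.com 7000 8000 9000` followed by a regular `ssh` connection would then get you in.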


Here’s our last tips section; congrats if you’ve read this far! It presents some extra settings that can increase your security a little further. They’re rarely essential, but they’re good know-how.

Trust SSH Host Keys

To be trusted by clients, the SSH server presents a key that allows you to authenticate it. This key is generated automatically and belongs to that server only. There’s one key per supported algorithm, presented to the client depending on the setup. On your first connection, you can compare this key with the one provided in your SSH admin section.
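To check a fingerprint before trusting a server, you can fetch the host key out-of-band and fingerprint it locally. A sketch (the demo fingerprints a freshly generated key so it runs anywhere; against a real server you would pipe `ssh-keyscan` output instead, as shown in the comment, where `your-server` is a placeholder):

```shell
# Demo: generate a key and print its fingerprint with ssh-keygen -lf
HOSTKEY="$(mktemp -d)/ssh_host_ed25519_key"
ssh-keygen -t ed25519 -N "" -f "$HOSTKEY" -q
ssh-keygen -lf "$HOSTKEY.pub"

# Against a real server:
# ssh-keyscan -t ed25519 your-server 2>/dev/null | ssh-keygen -lf -
```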

As modern clients support the ED25519 algorithm, your server can serve only this one and disable others to force its use.

# /etc/ssh/sshd_config

#HostKey /etc/ssh/ssh_host_dsa_key
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

In alwaysdata setup, we still keep all formats available for compatibility reasons.

Disable root access

Your root user is the super admin of your *nix system, allowed to do everything on the machine, even wipe it entirely10). So, it should be used carefully. Most of the time, you do not need to be root to do what you want; your system permissions should allow you to run your commands safely. So, it’s a good idea to disable remote root access11). This way, even if a malicious user gets access to the server, they won’t easily gain root privileges.

# /etc/ssh/sshd_config

PermitRootLogin no

Idle log out timeout

It can be a good idea to limit how long a user can stay connected without activity, to prevent later malicious use. Decrease the number of idle connections allowed and the inactivity timeout to make sure nobody stays longer than needed.

# /etc/ssh/sshd_config

ClientAliveInterval 300
ClientAliveCountMax 0

Chroot users

Users logged in remotely can access the whole system, sandboxed only by system and file permissions. Chrooting users into their home directories can be a good idea. Unfortunately, it’s not as trivial as it seems, and it may require relying on tools like rssh. It can be a complicated setup, but if your threat model includes a risk of data leakage, it’s worth it.
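For SFTP-only users, OpenSSH can actually do the chroot natively, without extra tools; a sketch (the `sftponly` group name is a placeholder, and the chroot target must be owned by root and not group-writable):

```ssh-config
# /etc/ssh/sshd_config
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```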


I hope this long blog post helped you understand a bit more about what matters in remote access security. Because it relies on cryptographic strength, it may be hard to manage the whole security of your stack by yourself. The pieces of advice here are what they are: safety tips to increase your security. Please never underestimate how hard it is to keep a whole system safe, and never begrudge working with competent partners who are there to help you. One of our jobs at alwaysdata is to do it for you: we make sure the cursor is correctly adjusted to give you flexibility without compromising security.

If you want to talk a bit about security, I’ll be in Rennes on Mar. 29, 2018 for the Breizhcamp conference, talking about cryptography and development. See you there!

Notes

1. duckduckgoing/qwanting the missing resources is also a great way to get more information
2. feel free to ask questions in the comments 🙂
3. and you genuinely don’t want to run across the world to connect and repair your systems
4. or when your collaborators quit
5. public and private
6. we’ll soon provide a solution in your admin panel to fill your public key right there
7. store them securely in a password manager, of course
8. you can also take a look at ufw to write your firewalling rules
9. because we can 🙂
10. never, ever, rm -rf / blindly
11. it is also a good idea to disable it locally and always use sudo to execute root commands