diff --git a/content/english/blog/3scale-toolbox-anonymous-policy.md b/content/english/blog/3scale-toolbox-anonymous-policy.md
index 257fe15..3a7e7eb 100644
--- a/content/english/blog/3scale-toolbox-anonymous-policy.md
+++ b/content/english/blog/3scale-toolbox-anonymous-policy.md
@@ -12,6 +12,8 @@ If you tried this approach by yourself you may end up, *sooner or later*, with a
 
 What is this policy and why is it there? Let's dig in!
 
+
+
 In a nutshell, the *Anonymous* policy instructs the *APIcast* gateway to expose an API **without any security mechanism**.
 Given how much we stress the importance of security in our very fragile IT systems, this raises the following question: why was it there in the first place?
diff --git a/content/english/blog/3scale-toolbox-url-rewriting-policy.md b/content/english/blog/3scale-toolbox-url-rewriting-policy.md
index 30165bf..b219b42 100644
--- a/content/english/blog/3scale-toolbox-url-rewriting-policy.md
+++ b/content/english/blog/3scale-toolbox-url-rewriting-policy.md
@@ -10,6 +10,8 @@ topics:
 In this article on the Red Hat Developer blog, I explained [how to deploy an API from a Jenkins Pipeline, using the 3scale toolbox](https://developers.redhat.com/blog/2019/07/30/deploy-your-api-from-a-jenkins-pipeline/).
 If you tried this approach by yourself you may have noticed that in some cases, the configured service includes the *URL Rewriting* policy in its *Policy Chain*.
 
+
+
 The *URL Rewriting* policy can be used for a variety of use cases but, in a nutshell, it is used by the toolbox to change the *Base Path* of an API.
 For instance, if your actual API implementation is live at **/camel/my-route** but you wish to expose it on **/api/v1**, you can instruct the 3scale toolbox to configure the *URL Rewriting* policy for you by specifying the `--override-private-basepath` and `--override-public-basepath` options.
diff --git a/content/english/blog/ansible-add-prefix-suffix-to-list.md b/content/english/blog/ansible-add-prefix-suffix-to-list.md
index 24aa6ff..8e3e6cb 100644
--- a/content/english/blog/ansible-add-prefix-suffix-to-list.md
+++ b/content/english/blog/ansible-add-prefix-suffix-to-list.md
@@ -9,6 +9,8 @@ topics:
 
 Recently, in [one of my Ansible playbooks](../airgap-openshift-installation-move-registry-created-using-oc-adm-release-mirror-between-environments) I had to prefix all items of a list with a chosen string.
 
+
+
 Namely, from the following list:
 
 ```python
diff --git a/content/english/blog/check-ansible-version-number-playbook.md b/content/english/blog/check-ansible-version-number-playbook.md
index 395f617..e557447 100644
--- a/content/english/blog/check-ansible-version-number-playbook.md
+++ b/content/english/blog/check-ansible-version-number-playbook.md
@@ -11,6 +11,8 @@ My Ansible playbooks sometimes use features that are available only in a very re
 
 To prevent unnecessary trouble for the teammates who will execute them, I like to add a task at the very beginning of my playbooks to check the Ansible version number and abort if the requirements are not met.
 
+
+
 ```yaml
 - name: Verify that Ansible version is >= 2.4.6
   assert:
diff --git a/content/english/blog/cleanup-playbook-3scale.md b/content/english/blog/cleanup-playbook-3scale.md
index cc10cd1..a1a4682 100644
--- a/content/english/blog/cleanup-playbook-3scale.md
+++ b/content/english/blog/cleanup-playbook-3scale.md
@@ -14,6 +14,8 @@ And with the new feature named *API-as-a-Product*, there are now **Backends and
 
 This article explains how to clean up a 3scale tenant using Ansible.
 
+
+
 ## Pre-requisites
 
 Make sure Ansible is installed locally and is a fairly recent version.
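Both of the last two hunks above revolve around running Ansible with a minimum version in place. As a complement to the in-playbook assertion shown in the check-ansible-version post (whose YAML is cut short by the diff context), here is a minimal shell pre-flight sketch; the required version, the `site.yml` playbook name and the reliance on GNU `sort -V` are assumptions for illustration only.

```sh
#!/bin/sh
# Illustrative wrapper (not taken from the posts above): abort before running
# the playbook if the locally installed Ansible is older than a minimum version.
REQUIRED="2.4.6"   # example value
CURRENT="$(ansible --version | head -n 1 | grep -Eo '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n 1)"

# GNU "sort -V" orders version strings: if the smallest of the two is not
# REQUIRED, then CURRENT is older than the requirement.
if [ "$(printf '%s\n%s\n' "$REQUIRED" "$CURRENT" | sort -V | head -n 1)" != "$REQUIRED" ]; then
  echo "Ansible >= $REQUIRED is required (found: ${CURRENT:-none})" >&2
  exit 1
fi

exec ansible-playbook site.yml   # hypothetical playbook name
```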
diff --git a/content/english/blog/cli-world-clock.md b/content/english/blog/cli-world-clock.md
index f448d85..e633e3c 100644
--- a/content/english/blog/cli-world-clock.md
+++ b/content/english/blog/cli-world-clock.md
@@ -10,6 +10,8 @@ requires me to leave my terminal.
 
 Let's meet the CLI World clock!
 
+
+
 ```sh
 function t() {
   for tz in Europe/Paris Europe/Dublin US/Eastern US/Central US/Pacific; do
diff --git a/content/english/blog/configure-redhat-sso-3scale-cli.md b/content/english/blog/configure-redhat-sso-3scale-cli.md
index ffef4e2..721b5b1 100644
--- a/content/english/blog/configure-redhat-sso-3scale-cli.md
+++ b/content/english/blog/configure-redhat-sso-3scale-cli.md
@@ -12,6 +12,8 @@ topics:
 The [official documentation](https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.8/html/administering_the_api_gateway/openid-connect#configure_red_hat_single_sign_on) describes the steps to configure Red Hat SSO / Keycloak but it uses the Graphical User Interface, which can be tedious if you have multiple environments to configure.
 Let's configure Red Hat SSO for 3scale using the CLI!
 
+
+
 As a pre-requisite, install [jq](https://stedolan.github.io/jq/download/).
 
 Fetch the hostname, login and password of your Red Hat SSO instance from your OpenShift environment.
diff --git a/content/english/blog/configure-truststore-apicurio-studio.md b/content/english/blog/configure-truststore-apicurio-studio.md
index 547ce81..5f9d945 100644
--- a/content/english/blog/configure-truststore-apicurio-studio.md
+++ b/content/english/blog/configure-truststore-apicurio-studio.md
@@ -13,6 +13,8 @@ topics:
 Unfortunately, sometimes TLS certificates can get in the way of proper communication between the two projects.
 This post explains how to configure the trust store in Apicurio to overcome TLS communication issues between Apicurio and Microcks.
 
+
+
 Start by gathering the CA certificates used in your company. There can be several.
 
 You can then create a trust store by running this command for each CA certificate to import:
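The exact `keytool` invocation is cut off by the hunk context above. For reference, a typical import looks like the following; the alias, certificate file, trust store name and `changeit` password are placeholders to adapt to your environment, not the article's exact values.

```sh
# Import one company CA certificate into a JKS trust store (placeholder names).
# Repeat the command with a different -alias/-file for each CA to import.
keytool -importcert -noprompt -trustcacerts \
        -alias company-root-ca \
        -file company-root-ca.crt \
        -keystore truststore.jks \
        -storepass changeit
```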
diff --git a/content/english/blog/deploying-invidious-openshift.md b/content/english/blog/deploying-invidious-openshift.md
index 077e345..09c002c 100644
--- a/content/english/blog/deploying-invidious-openshift.md
+++ b/content/english/blog/deploying-invidious-openshift.md
@@ -12,6 +12,8 @@ topics:
 There is a hosted instance at [invidio.us](https://invidio.us/) if you want to give it a try.
 But wouldn't it be cooler to host your own instance on your OpenShift cluster? Let's do it!
 
+
+
 Create a new project.
 
 ```sh
diff --git a/content/english/blog/deploying-miniflux-openshift.md b/content/english/blog/deploying-miniflux-openshift.md
index 017ed8c..6883c10 100644
--- a/content/english/blog/deploying-miniflux-openshift.md
+++ b/content/english/blog/deploying-miniflux-openshift.md
@@ -10,6 +10,8 @@
 [Miniflux](https://miniflux.app) is a minimalist, open source and opinionated RSS feed reader.
 
 There is a [hosted instance](https://miniflux.app/hosting.html) available at a fair price point, but wouldn't it be cooler to host your own instance on your OpenShift cluster? Let's do it!
 
+
+
 Create a new project.
 
 ```sh
diff --git a/content/english/blog/enable-global-policies-apicast.md b/content/english/blog/enable-global-policies-apicast.md
index 21d534b..9615498 100644
--- a/content/english/blog/enable-global-policies-apicast.md
+++ b/content/english/blog/enable-global-policies-apicast.md
@@ -12,6 +12,9 @@ This is very powerful since each service receives its specific configuration.
 However, if the same treatment has to be applied to every service exposed, it becomes an administration overhead.
 Fortunately, Apicast has the concept of *Global Policies* that apply to every service it exposes.
 
+
+
+
 An example of a widespread policy, especially during demos, is the CORS policy to allow the API Developer Portal to query the API Gateway directly.
 
 To configure the *Global Policy Chain*, you will have to provide a custom *Environment file*.
diff --git a/content/english/blog/feed-url-drupal-wordpress-wix-youtube.md b/content/english/blog/feed-url-drupal-wordpress-wix-youtube.md
index 88c9c24..9c19ab8 100644
--- a/content/english/blog/feed-url-drupal-wordpress-wix-youtube.md
+++ b/content/english/blog/feed-url-drupal-wordpress-wix-youtube.md
@@ -7,6 +7,8 @@ If like me you are using [an RSS reader](../deploying-miniflux-openshift/) to st
 
 But since most websites are based on commonly found CMSes, it is highly probable that the RSS feeds are there, just not advertised.
 
+
+
 Here are the URL patterns for the most common CMSes on the market:
 
 - **Wordpress**: `/feed/` or `/?feed=rss2`
diff --git a/content/english/blog/install-miniflux-raspberry-pi.md b/content/english/blog/install-miniflux-raspberry-pi.md
index 50d57a0..7912ad6 100644
--- a/content/english/blog/install-miniflux-raspberry-pi.md
+++ b/content/english/blog/install-miniflux-raspberry-pi.md
@@ -11,6 +11,8 @@ In the article "[Nginx with TLS on OpenWRT](../nginx-with-tls-on-openwrt/)", I e
 But without an application to protect, Nginx is quite useless.
 This article explains how to install [Miniflux](https://miniflux.app/) (a lightweight RSS reader) on your Raspberry PI and how to host it as an Nginx virtual host.
 
+
+
 Miniflux is a web application written in Go and backed by a PostgreSQL database. So we will need to install PostgreSQL, install Miniflux and set up Nginx.
 The rest of this article assumes you [installed OpenWRT on your Raspberry](../install-openwrt-raspberry-pi/), but it should be applicable to any Linux distribution with minimal changes.
 
 ## Install PostgreSQL
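The hunk above stops right at the "Install PostgreSQL" heading. As a rough illustration of what that step usually involves on a mainstream Linux distribution (the role name, password and `sudo -u postgres` invocation are assumptions; on OpenWRT you would run `psql` as the PostgreSQL user directly), the database bootstrap for Miniflux typically looks like this:

```sh
# Create a dedicated role and database for Miniflux, then enable the hstore
# extension that Miniflux requires. Names and password are examples only.
sudo -u postgres psql -c "CREATE ROLE miniflux WITH LOGIN PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE miniflux OWNER miniflux;"
sudo -u postgres psql -d miniflux -c "CREATE EXTENSION IF NOT EXISTS hstore;"
```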
diff --git a/content/english/blog/install-openwrt-raspberry-pi.md b/content/english/blog/install-openwrt-raspberry-pi.md
index 0824f16..a64270c 100644
--- a/content/english/blog/install-openwrt-raspberry-pi.md
+++ b/content/english/blog/install-openwrt-raspberry-pi.md
@@ -11,6 +11,8 @@ topics:
 It made design choices that set it apart from the usual Linux distributions: musl libc instead of the usual glibc, busybox instead of coreutils, ash instead of bash, etc.
 As a result, the system is very light and blazing fast!
 
+
+
 Continue reading to learn how to **install OpenWRT on your Raspberry PI**.
 
 ## Install OpenWRT
diff --git a/content/english/blog/install-operator-openshift-cli.md b/content/english/blog/install-operator-openshift-cli.md
index a90b254..197c14d 100644
--- a/content/english/blog/install-operator-openshift-cli.md
+++ b/content/english/blog/install-operator-openshift-cli.md
@@ -15,6 +15,8 @@ Most software now provides an operator and describes how to use it.
 Nevertheless, almost every piece of software documentation I have read so far includes the steps to install the operator using the nice GUI of OpenShift 4.
 But since my OpenShift environments are provisioned by a playbook, I want to be able to install operators using the CLI only!
 
+
+
 The [OpenShift official documentation](https://docs.openshift.com/container-platform/4.3/operators/olm-adding-operators-to-cluster.html#olm-installing-operator-from-operatorhub-using-cli_olm-adding-operators-to-a-cluster) covers this part but I did not find it very clear.
 So, this article tries to make it clearer: **how to install Kubernetes operators in OpenShift using only the CLI**.
diff --git a/content/english/blog/is-my-ntp-daemon-working.md b/content/english/blog/is-my-ntp-daemon-working.md
index 9651bb0..b4b7884 100644
--- a/content/english/blog/is-my-ntp-daemon-working.md
+++ b/content/english/blog/is-my-ntp-daemon-working.md
@@ -17,6 +17,8 @@ This can happen when your [NTP](https://en.wikipedia.org/wiki/Network_Time_Proto
 daemon is not synchronized.
 This means it cannot reliably determine the current time.
 
+
+
 First, make sure your NTP daemon is started:
 
 ```raw
diff --git a/content/english/blog/jmeter-assess-software-performances.md b/content/english/blog/jmeter-assess-software-performances.md
index d3b52d1..d7cc5b7 100644
--- a/content/english/blog/jmeter-assess-software-performances.md
+++ b/content/english/blog/jmeter-assess-software-performances.md
@@ -14,6 +14,8 @@ I could have jumped into the code and changed something, hoping it will improve
 But that would be ineffective and unprofessional.
 So, I decided to take an honest measure of the current performance as well as a reproducible setup to get consistent measures over time.
 
+
+
 This article explains how I built my performance testing lab using [JMeter](https://jmeter.apache.org/index.html) and an old ARM board.
 To keep this article short and readable, I focused on the assessment of two HTTP libraries (golang's net/http and valyala's fasthttp), leaving the discussion about the Telegram Photo Bot performance for a future article.
diff --git a/content/english/blog/nginx-with-tls-on-openwrt.md b/content/english/blog/nginx-with-tls-on-openwrt.md
index 637d645..ed2b1f9 100644
--- a/content/english/blog/nginx-with-tls-on-openwrt.md
+++ b/content/english/blog/nginx-with-tls-on-openwrt.md
@@ -11,6 +11,8 @@ topics:
 In the article "[Install OpenWRT on your Raspberry PI](../install-openwrt-raspberry-pi/)", I explained how to install OpenWRT on a Raspberry PI and the first steps as an OpenWRT user.
 As I plan to use my Raspberry PI to host plenty of web applications, I wanted to set up a versatile reverse proxy to protect them all, along with TLS support to meet today's security requirements.
 
+
+
 OpenWRT has an [nginx package](https://openwrt.org/packages/pkgdata/nginx), ready to be installed using *opkg*, but unfortunately it does not have TLS enabled.
 So we need to recompile nginx with TLS enabled!
 
 ## Install the OpenWRT SDK
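A quick way to check whether a given nginx binary was built with TLS support, whether it is the one shipped by *opkg* or your own rebuild, is to look at its compile-time options. This is a generic check, not something taken from the article above:

```sh
# nginx prints its compile-time configuration on stderr; a TLS-enabled build
# lists the --with-http_ssl_module configure flag.
nginx -V 2>&1 | grep -q -- --with-http_ssl_module \
  && echo "TLS support is compiled in" \
  || echo "No TLS support in this nginx build"
```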
diff --git a/content/english/blog/print-config-file-without-comments.md b/content/english/blog/print-config-file-without-comments.md
index 609a8a3..42b47f7 100644
--- a/content/english/blog/print-config-file-without-comments.md
+++ b/content/english/blog/print-config-file-without-comments.md
@@ -9,6 +9,8 @@ Sounds familiar?
 
 Not that comments are useless in a configuration file, but sometimes it's handy to print a configuration file without the comment lines.
 Especially when the file is a thousand lines long but the useful lines would fit in the twenty-five lines of a standard terminal.
 
+
+
 The `egrep` command, which is standard on most Linux distributions and on MacOS, can strip out the unwanted lines:
 
 ```sh
diff --git a/content/english/blog/running-redhat-sso-outside-openshift.md b/content/english/blog/running-redhat-sso-outside-openshift.md
index 568a88c..cbefa9b 100644
--- a/content/english/blog/running-redhat-sso-outside-openshift.md
+++ b/content/english/blog/running-redhat-sso-outside-openshift.md
@@ -11,6 +11,8 @@ In an article named [Red Hat Single Sign-On: Give it a try for no cost!](https:/
 As pointed out by a reader in a comment, as widespread as OpenShift can be, not everyone has access to a running OpenShift cluster.
 So, here is how to run Red Hat SSO outside of OpenShift: using only plain Docker commands.
 
+
+
 The rest of this procedure assumes you already have a token to access the Red Hat registry (full procedure described in [my article](https://developers.redhat.com/blog/2019/02/07/red-hat-single-sign-on-give-it-a-try-for-no-cost/) and in the [Red Hat SSO Getting Started guide, chapter 3, section 3.1](https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.3/html/red_hat_single_sign-on_for_openshift/get_started)).
 
 Start by logging in with this token using the *docker login* command (do not forget to replace the login and password with yours):
diff --git a/content/english/blog/secure-openshift-4-openid-connect-authentication.md b/content/english/blog/secure-openshift-4-openid-connect-authentication.md
index f05a4d6..014c4da 100644
--- a/content/english/blog/secure-openshift-4-openid-connect-authentication.md
+++ b/content/english/blog/secure-openshift-4-openid-connect-authentication.md
@@ -14,6 +14,8 @@ But this is yet another password to remember!
 
 OpenShift can handle the [OpenID Connect](https://openid.net/connect/) protocol and thus offers Single Sign-On to its users.
 No additional password to remember: you can log in to the OpenShift console with your [Google Account](../use-google-account-openid-connect-provider), for instance.
 
+
+
 ## Pre-requisites
 
 The rest of this article assumes you have already set up your OpenID Connect client in the Google Developer Console as explained in this article: [Use your Google Account as an OpenID Connect provider](../use-google-account-openid-connect-provider).
diff --git a/content/english/blog/secure-quarkus-api-with-keycloak.md b/content/english/blog/secure-quarkus-api-with-keycloak.md
index 3c5dac8..d8d246e 100644
--- a/content/english/blog/secure-quarkus-api-with-keycloak.md
+++ b/content/english/blog/secure-quarkus-api-with-keycloak.md
@@ -14,6 +14,8 @@ Quarkus can be used for any type of backend development, including API-enabled b
 
 In this article, I'm describing how to secure a Quarkus API with Keycloak using JWT tokens.
 
+
+
 ## Preparation
 
 As a pre-requisite, install [Maven](https://maven.apache.org/), [jq](https://stedolan.github.io/jq/download/) and [jwt-cli](https://github.com/mike-engel/jwt-cli#installation).
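Once Keycloak protects a Quarkus API, a quick smoke test from the terminal is to obtain a token and replay it against the API. The hostname, realm, client and test user below are placeholders (an illustration, not the exact commands of the article), and `jq` extracts the access token from the JSON response.

```sh
# Hypothetical Keycloak instance, realm, client and test user.
SSO_URL="https://sso.example.test/auth/realms/quarkus"

# The Resource Owner Password grant is convenient for quick tests from a shell.
ACCESS_TOKEN="$(curl -s "$SSO_URL/protocol/openid-connect/token" \
  -d "grant_type=password" \
  -d "client_id=backend-service" \
  -d "username=alice" \
  -d "password=alice" | jq -r '.access_token')"

# Call the (hypothetical) Quarkus endpoint with the Bearer token.
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" http://localhost:8080/api/users/me
```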
diff --git a/content/english/blog/secure-raspberry-pi-keycloak-gatekeeper.md b/content/english/blog/secure-raspberry-pi-keycloak-gatekeeper.md
index dc46910..91beaa4 100644
--- a/content/english/blog/secure-raspberry-pi-keycloak-gatekeeper.md
+++ b/content/english/blog/secure-raspberry-pi-keycloak-gatekeeper.md
@@ -13,6 +13,8 @@ Some of the web applications that I installed on my Raspberry PI do not feature
 No authentication means that anybody on the internet could reach those applications and play with them.
 This article explains how to secure applications running on a Raspberry PI with [Keycloak Gatekeeper](https://github.com/keycloak/keycloak-gatekeeper).
 
+
+
 [Keycloak Gatekeeper](https://github.com/keycloak/keycloak-gatekeeper) is a reverse proxy whose sole purpose is to authenticate end users using the [OpenID Connect](https://openid.net/connect/) protocol.
 While Keycloak Gatekeeper is best used in conjunction with the [Keycloak Identity Provider](https://www.keycloak.org/), it can also be used with any Identity Provider that conforms to the OpenID Connect specifications.
diff --git a/content/english/blog/send-mails-openwrt-msmtp-gmail.md b/content/english/blog/send-mails-openwrt-msmtp-gmail.md
index 6c55341..ea4552e 100644
--- a/content/english/blog/send-mails-openwrt-msmtp-gmail.md
+++ b/content/english/blog/send-mails-openwrt-msmtp-gmail.md
@@ -13,6 +13,9 @@ With great power comes great responsibility.
 So, you might want to be notified when something goes wrong (a cron job failed, a hard disk is dying, etc.) so that you can fix the problem as early as possible, maybe before anyone else notices.
 This article explains how to send mails on OpenWRT with MSMTP and a GMail account.
 
+
+
+
 You can adapt this procedure to any email provider that supports SMTP access with a login and password.
 
 ## Configure GMail
diff --git a/content/english/blog/testing-hard-drive-ssd-performance.md b/content/english/blog/testing-hard-drive-ssd-performance.md
index 980f5f9..567dee4 100644
--- a/content/english/blog/testing-hard-drive-ssd-performance.md
+++ b/content/english/blog/testing-hard-drive-ssd-performance.md
@@ -9,6 +9,8 @@ If your Linux system appears to be slow, it might be an issue with your disks,
 either hard drive or SSD.
 
 Fortunately, with a few commands you can get an idea of the performance of your disks.
 
+
+
 First, you will have to install `hdparm` using `yum` or `dnf`:
 
 ```sh
diff --git a/content/english/blog/use-ansible-to-manage-the-qos-of-your-openshift-workload.md b/content/english/blog/use-ansible-to-manage-the-qos-of-your-openshift-workload.md
index 2dcbf91..684ee0b 100644
--- a/content/english/blog/use-ansible-to-manage-the-qos-of-your-openshift-workload.md
+++ b/content/english/blog/use-ansible-to-manage-the-qos-of-your-openshift-workload.md
@@ -12,6 +12,8 @@ As I was administering my OpenShift cluster, I found out that I had way too many
 memory requests.
 To preserve a good quality of service on my cluster, I had to tackle this issue.
 
+
+
 Resource requests and limits in OpenShift (and Kubernetes in general) are the concepts that help define the quality of service of every running Pod.
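Kubernetes derives a QoS class (Guaranteed, Burstable or BestEffort) for each Pod from exactly these requests and limits. A quick, read-only way to see where a cluster stands is to list that class per Pod; any reasonably recent `oc` (or `kubectl`) accepts these flags:

```sh
# QoS class of every Pod in the current project.
oc get pods -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass

# Same information cluster-wide, sorted by QoS class to spot BestEffort Pods.
oc get pods --all-namespaces \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,QOS:.status.qosClass \
  | sort -k 3
```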
diff --git a/content/english/blog/use-google-account-openid-connect-provider.md b/content/english/blog/use-google-account-openid-connect-provider.md
index 297d869..efb31c7 100644
--- a/content/english/blog/use-google-account-openid-connect-provider.md
+++ b/content/english/blog/use-google-account-openid-connect-provider.md
@@ -10,6 +10,8 @@ Unless you have a password vault to store your credentials securely, it is very
 
 This article goes through all the steps to use your Google Account as an [OpenID Connect](https://openid.net/connect/) provider and subsequent articles (check the links at the bottom of this article) explain how to configure the different services and software to use your Google Account as an OpenID Connect provider.
 
+
+
 The article is divided into three parts:
 
 * a general overview of the OpenID Connect protocol
diff --git a/content/english/blog/use-qlkube-to-query-the-kubernetes-api.md b/content/english/blog/use-qlkube-to-query-the-kubernetes-api.md
index ff1ed66..29f54cb 100644
--- a/content/english/blog/use-qlkube-to-query-the-kubernetes-api.md
+++ b/content/english/blog/use-qlkube-to-query-the-kubernetes-api.md
@@ -12,6 +12,8 @@ topics:
 It strives to reduce the chattiness clients can experience when querying REST APIs.
 It is very useful for mobile and web development: by reducing the number of roundtrips needed to fetch the relevant data and by fetching only the needed fields, the network usage is greatly reduced.
 
+
+
 To install QLKube in OpenShift, use the NodeJS Source-to-Image builder:
 
 {{< highlight sh >}}
diff --git a/content/english/blog/writing-workshop-instructions-with-hugo-deploy-openshift.md b/content/english/blog/writing-workshop-instructions-with-hugo-deploy-openshift.md
index f499784..b8086b9 100644
--- a/content/english/blog/writing-workshop-instructions-with-hugo-deploy-openshift.md
+++ b/content/english/blog/writing-workshop-instructions-with-hugo-deploy-openshift.md
@@ -9,6 +9,8 @@ opensource:
 This is the third part of my series covering how to [Write workshop instructions with Hugo](../writing-workshop-instructions-with-hugo/).
 In this article, we will deploy our [Hugo mini-training](https://github.com/nmasse-itix/hugo-workshop/) as a container in OpenShift.
 
+
+
 Since Hugo is a static website generator, we only need a web server in our container to serve those pages. Let's settle for nginx, which is [neatly packaged as a container image as part of the Software Collections](https://www.softwarecollections.org/en/scls/rhscl/rh-nginx114/).
 And to build our final container image that will contain both our website (the static pages to serve) and the web server itself, we will use the [Source-to-image (S2I)](https://github.com/openshift/source-to-image) tool. Fortunately, the nginx image of the Software Collections is already S2I enabled!
diff --git a/content/english/blog/writing-workshop-instructions-with-hugo-variables.md b/content/english/blog/writing-workshop-instructions-with-hugo-variables.md
index 13972e0..c6bd9f5 100644
--- a/content/english/blog/writing-workshop-instructions-with-hugo-variables.md
+++ b/content/english/blog/writing-workshop-instructions-with-hugo-variables.md
@@ -16,6 +16,8 @@ In the first part, we saw how to:
 
 For this second part, we will add variables to our content so that we can easily adjust the workshop instructions to different use cases.
 
+
+
 One of the most common uses we have for variables is to deliver the same workshop on different environments.
 This means URLs, usernames and passwords change, and we need to adjust our workshop instructions very quickly to match the new environment.
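One simple way to achieve this (not necessarily the mechanism used later in that article) is to keep one Hugo configuration file per environment and select it at build time; the file names and parameters below are hypothetical. Templates and shortcodes then read the values through `.Site.Params`.

```sh
# config.dev.toml and config.prod.toml are hypothetical files that define the
# same [params] keys (apiBaseUrl, username, password, ...) with different values.
hugo server --config config.dev.toml   # rehearse locally with the dev values
hugo --config config.prod.toml         # render the instructions for the real environment
```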
diff --git a/content/english/blog/writing-workshop-instructions-with-hugo.md b/content/english/blog/writing-workshop-instructions-with-hugo.md
index a73055c..88f6748 100644
--- a/content/english/blog/writing-workshop-instructions-with-hugo.md
+++ b/content/english/blog/writing-workshop-instructions-with-hugo.md
@@ -29,6 +29,8 @@ but is difficult to work with for the participants.
 
 Fortunately, [Hugo](https://gohugo.io/) can help us!
 
+
+
 As an example, in the rest of this article, we will craft a mini-training about Hugo!
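For readers who want to follow along, the very first steps of such a mini-training boil down to scaffolding a new Hugo site. The site name, theme and page path below are arbitrary examples, not the exact layout of the author's [hugo-workshop](https://github.com/nmasse-itix/hugo-workshop/) repository:

```sh
# Scaffold a new site and add a first page of instructions.
hugo new site hugo-mini-training
cd hugo-mini-training
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke
echo 'theme = "ananke"' >> config.toml
hugo new exercises/01-getting-started.md
hugo server -D    # -D also renders content still marked as draft
```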