WHO AM I?
Expertise

Full-Stack Web

Rich Web Applications

Using modern UI technologies like Node.js, React and RxJS, I build frameworks and libraries that enhance rich client applications running across all types of devices. Applying SEO and UI/UX principles, I strive to create cutting-edge, immersive experiences that delight customers, exhibit exceptional performance and work flawlessly.

With tools like Socket.io, Express, Next.js or Django, I establish a robust, elastic back-end that adapts easily to the client's requirements. Web APIs, whether REST, SOAP or even GraphQL, are chosen to match the customer's needs.
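As a sketch of the REST flavor, a tiny framework-free dispatcher in Python can illustrate how a resource maps to method/path pairs (the routes and the in-memory data are invented for the example):

```python
# Tiny in-memory store standing in for a real database (hypothetical data).
ITEMS = {"1": {"id": "1", "name": "example"}}

def handle(method, path):
    """Route a request to a (status, payload) pair, REST-style."""
    if method == "GET" and path == "/items":
        return 200, list(ITEMS.values())          # collection endpoint
    if method == "GET" and path.startswith("/items/"):
        item = ITEMS.get(path.rsplit("/", 1)[-1])  # single-resource endpoint
        return (200, item) if item else (404, None)
    return 405, None                               # method not allowed
```

A real framework such as Express or Django replaces this dispatch table with declarative routing, but the resource-oriented shape is the same.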

From simple blogs, chat apps and landing pages to complex CRMs, CMSs and marketplaces, I apply MVC, OOP and functional programming to build Progressive Web Applications, weighing in each case whether a Single Page Application or a Multi-Page Application fits best.

Continuously improving the product through data-driven A/B testing enables experimentation with novel concepts and a clear understanding of the value of every feature that ships.
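The arithmetic behind such a test can be sketched as a two-proportion z-test in Python (the function name and the sample numbers are illustrative, not a specific product's API):

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_* are conversion counts, n_* are sample sizes.
    Returns (lift, p_value) with a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value
```

A low p-value suggests the observed lift is unlikely to be noise, which is what justifies shipping (or killing) the variant.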


Common Runtime Services & Libraries

Runtime containers, libraries and services that power micro-services

A micro-service design is the foundation and technology stack behind the top-notch services out there. The cloud platform consists of cloud services, application libraries and application containers. Specifically, the platform provides service discovery through sidecars, distributed configuration via Ansible or Terraform, and resilient, intelligent inter-process and service communication with Netflix's open-sourced Ribbon or similar libraries.
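A minimal sketch of sidecar-style service discovery, assuming a made-up in-memory registry rather than any real Netflix component: instances register with a heartbeat, and entries whose heartbeat is older than a TTL are treated as dead.

```python
import time

class ServiceRegistry:
    """Eureka-style service registry sketch (hypothetical, in-memory)."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl                 # seconds before an entry is stale
        self._instances = {}           # service -> {address: last_heartbeat}

    def register(self, service, address, now=None):
        """Record (or refresh) an instance's heartbeat."""
        now = time.time() if now is None else now
        self._instances.setdefault(service, {})[address] = now

    def lookup(self, service, now=None):
        """Return live addresses, dropping entries past their TTL."""
        now = time.time() if now is None else now
        live = {a: t for a, t in self._instances.get(service, {}).items()
                if now - t <= self.ttl}
        self._instances[service] = live
        return sorted(live)
```

In a real deployment the registry is itself replicated, and clients cache lookups so discovery outages do not take traffic down with them.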

Within the EC2 cloud, I work with the Amazon autoscaler, build boto3 tooling to report on cloud utilization, and provide implementations for bin packing, cluster autoscaling and custom scheduling optimizations that can be extended through user-defined plugins.
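The bin-packing piece can be illustrated with the classic first-fit-decreasing heuristic (a generic sketch of the technique, not actual scheduler code):

```python
def first_fit_decreasing(items, capacity):
    """First-fit-decreasing heuristic for bin packing.

    Sorts items largest-first, places each into the first bin with
    room, and opens a new bin when none fits. Returns the bins.
    """
    bins = []   # contents of each bin
    free = []   # remaining capacity per bin, kept in step with `bins`
    for item in sorted(items, reverse=True):
        for i, room in enumerate(free):
            if item <= room:
                bins[i].append(item)
                free[i] -= item
                break
        else:  # no existing bin had room: open a new one
            bins.append([item])
            free.append(capacity - item)
    return bins
```

In a cluster-autoscaling context, "items" are workload resource requests and "bins" are instances; fewer bins means fewer machines to pay for.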


Build and Delivery Tools

Taking code from desktop to the cloud

Automate steps in your software delivery process, such as initiating code builds, running automated tests, and deploying to a staging or production environment. Standardizing development lowers the barrier to creating new builds and keeps them modular and composable.

We need additional techniques to take these builds from the developers' desks to production, the cloud or elsewhere, and it takes skill to design and plumb a tailored, sophisticated pipeline. I work with any version control system, runtime package or library, language, scripts and tools to provide end-to-end continuous development, integration, testing, delivery, deployment and monitoring. The stack I tend to advise is GitLab, Jenkins, Artifactory, Helm, OpenShift, Prometheus and Grafana.
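A fail-fast pipeline runner can be sketched in a few lines of Python (the stage names and commands are placeholders; GitLab CI or Jenkins would declare the same flow in their own DSLs):

```python
import subprocess
import sys

# Hypothetical stages; each command is a stand-in for a real build step.
PIPELINE = [
    ("build",  [sys.executable, "-c", "print('compiling')"]),
    ("test",   [sys.executable, "-c", "print('running tests')"]),
    ("deploy", [sys.executable, "-c", "print('shipping to staging')"]),
]

def run_pipeline(stages):
    """Run stages in order, stopping at the first failure (fail-fast).

    Returns the names of the stages that completed successfully.
    """
    completed = []
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            break  # a red stage halts everything downstream
        completed.append(name)
    return completed
```

The fail-fast property is the point: a broken build never reaches the deploy stage.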


Continuous Testing

Automated Testing made easy

A crucial part of any application, continuous testing at the scale of a fast-growing architecture demands solid development material. Tools like Selenium, Appium and SonarQube enhance the process with fast, responsive feedback and steadily improving accuracy, UI quality and overall code quality on any device.
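As a minimal illustration of the kind of fast unit test a continuous-testing stage runs on every commit (the `slugify` helper is invented for the example; browser-level tools like Selenium sit above this layer):

```python
import unittest

def slugify(title):
    """Function under test: a hypothetical URL-slug helper."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    """Fast, deterministic checks suitable for a CI gate."""

    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_idempotent(self):
        # Slugifying twice must give the same result as once.
        self.assertEqual(slugify(slugify("A B")), slugify("A B"))
```

Keeping this layer fast is what makes it practical to run on every push rather than nightly.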

Quality testing is essential and must be integrated with care. While I am devoted to pursuing 100% coverage, it is not always necessary; the right target depends on the sprint, the expected value of each feature and the team's risk appetite.


Data Persistence

Storing and Serving data in the Cloud.

Handling from a few to over a trillion data operations per day requires an interesting mix of off-the-shelf OSS and in-house projects. No single data technology can meet every use case or satisfy every latency requirement. My qualifications range from non-durable in-memory stores like Memcached, Redis and Hollow, to searchable datastores such as Elasticsearch, and durable, must-never-go-down datastores like Cassandra, MongoDB, ScyllaDB, PostgreSQL and MySQL.
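The look-aside caching pattern behind Memcached- and Redis-style stores can be sketched with a tiny TTL cache (an illustration of the pattern, not any of the named systems):

```python
import time

class TTLCache:
    """In-memory cache with per-entry expiry (look-aside pattern sketch)."""

    def __init__(self, ttl):
        self.ttl = ttl        # lifetime of each entry, in seconds
        self._store = {}      # key -> (expires_at, value)

    def set(self, key, value, now=None):
        now = time.time() if now is None else now
        self._store[key] = (now + self.ttl, value)

    def get(self, key, now=None):
        """Return the value, or None if missing or expired."""
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry is None or now > entry[0]:
            self._store.pop(key, None)  # evict stale entries lazily
            return None
        return entry[1]
```

On a miss the application falls through to the durable store and repopulates the cache, trading a bounded staleness window for a large cut in database load.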

Consuming these technologies at varying cloud scale brings up the need for tools and services that enhance the datastores. Sidecars like Netflix's Raigad and Priam help with the deployment, management and backup/recovery of Elasticsearch and Cassandra clusters, while EVCache and Dynomite were created to use Memcached and Redis at scale, and Dyno is a client library that makes Dynomite easier to consume in the cloud.


Security

Defending at Scale

Security is an increasingly important area for organizations of all types and sizes, and I am happy to use, and contribute to, a variety of security tools and solutions from the open-source community. Major security-related open-source efforts focus primarily on operational tools and systems that make security teams more efficient and effective at securing large, dynamic environments.

Security Monkey helps monitor and secure large AWS-based environments, allowing security teams to identify potential security weaknesses. Scumblr is an intelligence gathering tool that leverages Internet-wide targeted searches to surface specific security issues for investigation. Stethoscope is a web application that collects information from existing systems management tools (e.g., JAMF or LANDESK) on a given employee’s devices and gives them clear and specific recommendations for securing their systems.

I also analyze containers with Aqua Security and Snyk, and check binaries with Checkmarx. Implementing RBAC, ABAC or PBAC solutions, capping resources (requests/limits) via Kubernetes Custom Resource Definitions, and using Helm charts or Vault for secrets all help enforce the deep, clean security architecture known today as DevSecOps.
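A minimal sketch of the RBAC default-deny check, with made-up roles and bindings (Kubernetes expresses the same idea declaratively as Role and RoleBinding objects):

```python
# Hypothetical role and binding tables for the sketch.
ROLES = {
    "viewer": {("pods", "get"), ("pods", "list")},
    "editor": {("pods", "get"), ("pods", "list"), ("pods", "create")},
}
BINDINGS = {"alice": {"editor"}, "bob": {"viewer"}}

def allowed(user, resource, verb):
    """RBAC check: a request is allowed iff some role bound to the
    user grants the (resource, verb) pair; everything else is denied."""
    return any((resource, verb) in ROLES[role]
               for role in BINDINGS.get(user, ()))
```

The important property is the default: an unknown user or an unbound verb falls through to deny, never to allow.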


Insight, Reliability and Performance

Providing Actionable Insights

Telemetry and metrics play a critical role in the operations of any company, with more than a billion metrics per minute flowing into time-series platforms such as Atlas, CloudWatch, Splunk, or a combination of open-source projects like Prometheus and Grafana. Operational insight, however, is a higher-order family of products, including the ability to understand the current components of the cloud ecosystem at hand.

Effective performance instrumentation lets engineers drill down quickly into a massive volume of metrics and make critical decisions efficiently. Ultimately, the goal is to expose high-resolution, host-level metrics with minimal overhead.
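A sketch of the rollup step such platforms apply to high-resolution samples before storing them (the window size and sample values are illustrative):

```python
from collections import defaultdict
from statistics import mean

def rollup(samples, window=60):
    """Collapse raw (timestamp, value) samples into per-window averages.

    Returns {window_start: mean_value}, the basic aggregation a
    time-series store performs to bound storage and query cost.
    """
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // window)].append(value)
    return {b * window: mean(vs) for b, vs in sorted(buckets.items())}
```

Real platforms keep several resolutions at once (seconds for recent data, minutes or hours for history), which is what makes both incident drill-down and long-range trends affordable.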

Being able to understand the current state of our complex microservice architecture at a glance is crucial when making remediation decisions.

Finally, to validate reliability, I use Chaos Monkey, which randomly terminates instances to test resilience to failure, alongside the rest of the Simian Army.
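The core of that failure injection can be sketched generically (this is not Chaos Monkey's actual code, just the idea of random termination made explicit):

```python
import random

def chaos_round(instances, kill_probability, rng):
    """One Chaos-Monkey-style round: terminate each instance
    independently with `kill_probability`; return (survivors, killed)."""
    survivors, killed = [], []
    for instance in instances:
        if rng.random() < kill_probability:
            killed.append(instance)   # simulate terminating this instance
        else:
            survivors.append(instance)
    return survivors, killed
```

Passing in the random generator keeps runs reproducible, which matters when you need to replay the exact failure pattern that exposed a weakness.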

