![](/static/cv.jpg)
What was the reason I chose to study the language of computers?
My journey into programming started just as the pandemic began and we were all supposed to stay home. I had just finished my studies and my military service, and I was thinking about a stable job I could rely on. Although I already had experience in different fields from my undergraduate years, as shown on my LinkedIn profile, this time I chose a new field: one I would love to spend most of my time in, and one I had a talent for.
What I have learned through my journey:
- Deep understanding of Python and object-oriented programming
- Knowledge of writing microservice web applications with the Flask framework
- Understanding of cloud-native methodologies and MVC design patterns
- Virtualization with semi-cloud MAAS (Metal as a Service), automated provisioning with Terraform, and configuration management with Ansible
- Working with SQL and NoSQL databases; strong experience with the SQL language, including handling complex SQL queries with Python Pandas DataFrames
- Containerizing system services with high-availability architecture
- Monitoring tools such as Grafana, Prometheus, Loki, Promtail, and Fluentbit; capable of writing Prometheus exporters in Python
- DevOps orchestration tools such as Docker Swarm and Kubernetes
- Full CI/CD automation in Jenkins with the help of the Groovy language
- Familiarity with basic security considerations such as brute-force attacks and SYN-flood DDoS attacks
- Documentation with Wiki.js
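As a small illustration of the exporter-writing skill above, here is a minimal sketch of a Prometheus exporter in plain Python. It uses only the standard library instead of the official `prometheus_client` package, and the load-average metric is purely illustrative, not taken from any exporter I wrote at work:

```python
# Minimal Prometheus exporter sketch (stdlib only).
# The metric (1-minute load average) is illustrative.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


def render_metrics() -> str:
    """Render metrics in the Prometheus text exposition format."""
    load1, _load5, _load15 = os.getloadavg()
    lines = [
        "# HELP node_load1 1-minute load average.",
        "# TYPE node_load1 gauge",
        f"node_load1 {load1}",
    ]
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To serve, point Prometheus at port 9100 and run:
# HTTPServer(("0.0.0.0", 9100), MetricsHandler).serve_forever()
```

In practice the `prometheus_client` library handles the exposition format and metric registry for you; the sketch just shows the text format Prometheus scrapes.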
Work Experience in DevOps
Junior DevOps Engineer at Zitel
My job experience in the Charging department of Zitel:
- Managing 5 bare-metal servers as a Swarm cluster with Portainer.
- Implementing monitoring tools: Grafana, Loki, Fluentbit, Prometheus, and Alertmanager.
- Managing FreeRADIUS as an AAA (authentication, authorization, accounting) service for P2P customers, including Telnet authentication for BRAS and DSLAM devices and bandwidth rate-limiting for MikroTik and Cisco devices.
- Writing a Python program that collects the MAC addresses of P2P users and periodically updates the GestióIP database by SSH-ing into every switch in the network infrastructure.
- Writing a Python program that collects the upload, download, and availability metrics of every radio site, generates hourly Excel reports, and sends them to the Regulatory authority over FTP.
- Creating a documented disaster recovery plan for my services.
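The core of the MAC-collection program above is parsing switch CLI output. A hedged sketch of that step, assuming Cisco-style `show mac address-table` output (the sample text and column layout are illustrative; real output varies by vendor, and the SSH transport is omitted):

```python
# Sketch: extract (vlan, mac, port) tuples from Cisco-style
# "show mac address-table" output fetched over SSH.
import re

# Cisco MAC format: four hex digits, dot, four, dot, four.
MAC_LINE = re.compile(
    r"^\s*(\d+)\s+([0-9a-f]{4}\.[0-9a-f]{4}\.[0-9a-f]{4})\s+\S+\s+(\S+)",
    re.IGNORECASE,
)


def parse_mac_table(output: str) -> list[tuple[str, str, str]]:
    """Return (vlan, mac, port) for each entry in the table."""
    entries = []
    for line in output.splitlines():
        m = MAC_LINE.match(line)
        if m:
            entries.append((m.group(1), m.group(2).lower(), m.group(3)))
    return entries


# Illustrative sample output, not from a real device.
sample = """
Vlan    Mac Address       Type        Ports
----    -----------       --------    -----
 100    0050.5686.1a2b    DYNAMIC     Gi1/0/1
 200    0050.5686.3c4d    DYNAMIC     Gi1/0/24
"""
print(parse_mac_table(sample))
```

The parsed tuples can then be pushed to the IPAM database; in the real program an SSH library such as Paramiko or Netmiko would fetch the output from each switch.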
Technologies that I used in this job:
- Python
- Ansible
- Bash
- MySQL
- Percona
- ProxySQL
- MariaDB
- Couchbase
- MongoDB
- Docker
- Swarm
- Grafana
- Prometheus
- Loki
- Promtail
- Fluentbit
- GlusterFS
- Zabbix
- FreeRADIUS
DevOps Engineer at Ernyka Group
My job in the datacenter department of Ernyka Group was to work on Blockchain Nodes as a Service for their own cloud platform.
My job experience:
- Full nodes synced: Bitcoin, Algorand, Binance Smart Chain, Cardano, Litecoin, Tron
- Archive nodes synced: Ethereum, Binance Smart Chain
- Containerizing the synced nodes above
- R&D in troubleshooting the Geth (Go-Ethereum) client
- Using ZFS pools for better performance of passthrough disks
- Creating a central Grafana dashboard to monitor the syncing status of nodes, plus monitoring of all Geth clients in eth-netstats
- R&D in Geth client disaster recovery solutions (e.g. moving the data directory or restoring backups)
- R&D in Ethereum Geth client memory-cache parameters and their impact on resource usage and LevelDB database compaction
- Working with the HTTP and gRPC APIs of ETH, TRX, BTC, and BSC
- R&D in different Geth client configurations, such as alternative database engines (e.g. PebbleDB) or using the Freezer
- Documentation of synced blockchain nodes in Wiki.js
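The sync-status dashboard above boils down to interpreting Geth's `eth_syncing` JSON-RPC response: `false` when the node is fully synced, otherwise an object with hex-encoded block numbers. A minimal sketch of that interpretation step (the HTTP call to the node is omitted; the field names follow the standard Ethereum JSON-RPC spec):

```python
# Sketch: summarize a Geth eth_syncing JSON-RPC result for a dashboard.
# eth_syncing returns False when synced, otherwise a dict with
# hex-encoded "currentBlock" and "highestBlock" fields.
def sync_status(result) -> str:
    """Turn an eth_syncing result into a human-readable summary."""
    if result is False:
        return "synced"
    current = int(result["currentBlock"], 16)
    highest = int(result["highestBlock"], 16)
    pct = 100.0 * current / highest if highest else 0.0
    return f"syncing {current}/{highest} ({pct:.1f}%)"


print(sync_status(False))  # -> synced
print(sync_status({"currentBlock": "0x64", "highestBlock": "0xc8"}))
```

In the real setup, a small exporter polled each node's RPC endpoint and exposed these numbers as Prometheus gauges for Grafana to plot.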
Technologies that I used in this job:
- Bash
- Git
- Geth
- bitcoind and its forks
- JavaTron
- Docker
- Grafana
- Prometheus
- Loki
- eth-netstats
- Fluentbit
- Wiki.js
- ZFS
DevOps Monitoring at Digikala
In the DevOps department of Digikala, I worked in the monitoring section, where my responsibilities included maintaining and extending monitoring services and clusters.
My job experience:
- Creating simple and efficient high-availability clusters with Docker Swarm
- Migrating systemd services into Docker containers
- Maintaining Prometheus, Alertmanager, Grafana, and Blackbox for metric-based monitoring
- Maintaining Elasticsearch shards, Logstash, and Kibana for collecting logs from reverse proxies such as Nginx and HAProxy
- Separating monitoring clusters from the production zone because of security concerns
- Creating custom Prometheus exporters
- Creating a Jira plugin that sends Jira issues to Slack as simplified issue messages
- Creating an agile documentation tool as a single entity for docs, videos, and to-do lists with Wiki.js
- Creating Bash scripts that monitor and restart system services or containers as self-healing procedures
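To illustrate the self-healing idea, here is a sketch in Python rather than Bash: parse `docker ps -a --format '{{.Names}} {{.Status}}'` output and decide which containers need a restart. The sample output is illustrative, and the actual restart command is shown but disabled:

```python
# Sketch of a self-healing check: pick containers whose status is
# exited or unhealthy from `docker ps -a --format '{{.Names}} {{.Status}}'`.
def unhealthy_containers(ps_output: str) -> list[str]:
    """Return names of containers that should be restarted."""
    names = []
    for line in ps_output.strip().splitlines():
        name, _, status = line.partition(" ")
        if "(unhealthy)" in status or status.startswith("Exited"):
            names.append(name)
    return names


# Illustrative sample, not real cluster output.
sample = """\
grafana Up 3 days (healthy)
loki Up 2 hours (unhealthy)
promtail Exited (1) 5 minutes ago
"""

for name in unhealthy_containers(sample):
    # subprocess.run(["docker", "restart", name])  # the real action, disabled here
    print(f"would restart {name}")
```

The production scripts did the equivalent in Bash under cron, with alerting when a container kept flapping.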
Technologies that I used in this job:
- Bash
- Salt
- Prometheus
- Grafana
- Alertmanager
- Blackbox
- Elasticsearch
- Kibana
- Swarm
- Docker
- Python / Flask
- Wiki.js
Skills & Tools
Frontend:
- Photoshop/Illustrator
- HTML/CSS/Bootstrap
Backend:
- Python
- Flask
- Go
DevOps Tools:
- Bash Scripts
- Groovy
- Git
- Terraform
- Ansible
- MAAS
- Docker
- Docker Swarm
- Grafana-Loki-Fluentbit
- Kubernetes
- Jenkins
- Traefik-Nginx
Databases:
- ORMs in Python and Golang
- SQL Servers
- NoSQL Servers
Network:
- NetBox and GestióIP IPAM Services
- Network+
- Configuring Network Topology
- IP Firewall (NAT, Mangle)
- Routing
- MikroTik Routers
Currently Learning
- Kubernetes
- Golang
LinkedIn Skill Assessment Badges
- Python
- Linux
- REST APIs
Education
- BSc in Metallurgy, Khaje Nasir University of Technology
Languages
- Persian (Native)
- English (Professional)
Interests
- Books
- Movies & Documentaries
- Climbing
- Chess
- Biking