Laravel: Automate Code Formatting!
Pint is one of the newest members of Laravel's first-party packages and will help us keep our code readable and consistent.
Installing and configuring Laravel Pint is easy. It is built on top of PHP-CS-Fixer, so it has tons of rules to fix code style issues. (You don't need Laravel 9 to use Pint, and it's a zero-dependency package.)
But running Pint can be painful, because every time we want to push our changes to the remote repository, we have to run the command below manually:
./vendor/bin/pint --dirty
The --dirty flag runs PHP-CS-Fixer for changed files only. If we want to check styles for all files, we just remove the --dirty flag.
In this article we will automate the Pint code style check so it runs before every commit. That way the whole team keeps a well-defined code structure, and nobody has to remember to run Laravel Pint before pushing to the remote repo!
Before we start, note that this is a very simple setup; you can add as many options as you want to Laravel Pint.
In order to run ./vendor/bin/pint --dirty just before every commit, we should use the pre-commit hook inside the .git folder.
First of all, we will create a scripts folder inside our Laravel root directory. In this folder we will have a setup.sh file and a pre-commit file without any extension:
scripts/
  setup.sh
  pre-commit
Inside our setup.sh we have:
#!/usr/bin/env bash
cp scripts/pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
And write the following lines in the pre-commit file:
#!/usr/bin/env bash
echo "Check php code styles..."
echo "Running PHP cs-fixer"
./vendor/bin/pint --dirty
git add .
echo "Done!"
Second, go to the composer.json file and add this line to the scripts object (if the post-install-cmd key does not exist, create it first and then add the line below):
"post-install-cmd": [
    "bash scripts/setup.sh"
]
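For context, the scripts section of composer.json would then look roughly like this (keep whatever other entries your project already has; this fragment only shows the relevant key):

```json
{
    "scripts": {
        "post-install-cmd": [
            "bash scripts/setup.sh"
        ]
    }
}
```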
Third, we will require the Pint package:
composer require laravel/pint --dev
And to be sure, don't forget to run:
composer install
The composer install command will add the pre-commit hook to our .git folder, and after that we are ready to go!
From now on, we can simply write our code, and just before we commit our changes, Pint will run automatically and fix our code styles!
Pint uses Laravel code styles by default, but if you want to use psr-12 like me, you can create a pint.json file inside the root directory of your Laravel project and copy the JSON below to get a more opinionated PHP code style:
{
    "preset": "psr12",
    "rules": {
        "simplified_null_return": true,
        "blank_line_before_statement": {
            "statements": ["return", "try"]
        },
        "binary_operator_spaces": {
            "operators": {
                "=>": "align_single_space_minimal"
            }
        },
        "trim_array_spaces": false,
        "new_with_braces": {
            "anonymous_class": false
        }
    }
}
This is a simple config for our Pint command: it will simplify null returns and align array arrows with equal indentation. You can check all the available options in the PHP-CS-Fixer documentation.
Resize ext4 file system
Using Growpart
$ growpart /dev/sda 1
CHANGED: partition=1 start=2048 old: size=39999455 end=40001503 new: size=80000991,end=80003039
$ resize2fs /dev/sda1
resize2fs 1.45.4 (23-Sep-2019)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 5
The filesystem on /dev/sda1 is now 10000123 (4k) blocks long.
Using Parted & resize2fs
apt-get -y install parted
parted /dev/vda unit s print all        # print current data for a case
parted /dev/vda resizepart 2 yes -- -1s # resize /dev/vda2 first
parted /dev/vda resizepart 5 yes -- -1s # resize /dev/vda5
partprobe /dev/vda                      # re-read partition table
resize2fs /dev/vda5                     # get your space
Parted doesn't work on ext4 on CentOS. I had to use fdisk to delete and recreate the partition, which (I validated) works without losing data. I followed the steps at http://geekpeek.net/resize-filesystem-fdisk-resize2fs/. Here they are, in a nutshell:
$ sudo fdisk /dev/sdx
> c
> u
> p
> d
> p
> w
$ sudo fdisk /dev/sdx
> c
> u
> p
> n
> p
> 1
> (default)
> (default)
> p
> w
Source: https://serverfault.com/questions/509468/how-to-extend-an-ext4-partition-and-filesystem
1. Why?
There are lots of ways to manage your company, home, or corporate DNS zones. You can offload this task to any DNS registrar, you can use any available DNS server software with any back end that you like, or you can use Zabbix, and particularly the Zabbix database, as your trusty back end. Consider the simple fact that you have already installed and configured Zabbix on your network, and that you invested considerable time and effort in doing so. Looking inside Zabbix, you see that it knows a great deal about your infrastructure: its host names and IP addresses. Maybe you are also running a discovery process on your network and keeping this portion of the configuration up to date. Maybe you have already integrated Zabbix with your inventory system, and with your ticketing system. If you have not done that already, maybe you should. So your Zabbix installation is already one of the central points of your enterprise management. Any reason why you are still using vi to manage your DNS zones, or paying somebody to do this for you, when you have all you need at your fingertips?
2. What will you need?
Aside from Zabbix itself, not much:
- Unbound DNS resolver
- Python development infrastructure (to properly build Unbound)
- Redis
- redis-py
Plus some time and software development skills.
3. Prepare your environment.
I will not cover how to install and configure Python on your target hosts; you can install it from rpm/deb repositories or compile it yourself from scratch. Second, download the Unbound DNS resolver and compile it. I am doing this with the command:
./configure --with-libevent --with-pyunbound --with-pthreads --with-ssl --with-pythonmodule
Please note that you must have the development files for libevent, OpenSSL, POSIX threads, and Python on your host.
Next, compile and install the REDIS server. I will leave you with the excellent Redis documentation as your guide through this process; all I want to say is: "It is not difficult to do." After you've compiled and installed Redis, install the Python Redis module, redis-py.
4. Overview of the design.
You will have a number of components in your Zabbix-DNS infrastructure:
- REDIS servers. These will serve as the primary storage for your direct and reverse mappings. Depending on the size of your DNS zones, you may want to scale the memory of the hosts on which you run your REDIS servers. All REDIS servers are configured for persistence.
- DNS_REDIS_SYNC. A script which will query the SQL table "interface" from the Zabbix database and populate the master REDIS server.
- resolver.py. An Unbound Python script which will provide proper interfacing between the Zabbix database, REDIS, and the Unbound resolver.
5. Masters and slaves.
I am intentionally insisting on the more complicated master-slave configuration for your installation. When you need to scale your DNS cluster, you will appreciate that you've done this. Depending on your Zabbix configuration, you may choose an appropriate location for your master REDIS server and the DNS_REDIS_SYNC process.
Depending on the size of your Zabbix installation and the number of NVPS, you may consider performing "select" operations on the SQL table "interface" on a slave MySQL server that is less busy with inserts and updates.
How to set up master-slave MySQL replication is outside the scope of this article; Google it. The slave REDIS node should be local to a DNS resolver.
6. DNS_REDIS_SYNC
DNS_REDIS_SYNC is a simple Python script (or whatever language you choose, as long as it can interface with MySQL and REDIS) designed to populate the master REDIS storage. In order to get information from the table "interface", you may issue the query:
select interfaceid,ip,dns from interface where type = 1
When you've got all your Name->IP associations from the Zabbix database, start to populate the direct and reverse zones in REDIS, like:
SET A:%(name) %(ip)
SET PTR:%(ip) %(name)
You do not want keys to stick in your REDIS forever, so I recommend setting a conservative expiration for your keys (see Chapter 7):
EXPIRE A:%(name) %(expiration_time_in_sec)
EXPIRE PTR:%(ip) %(expiration_time_in_sec)
That’s it. Your REDIS database is ready to be used by resolver.py module.
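As a rough illustration, a minimal DNS_REDIS_SYNC could look like the sketch below. The SQL query and the SET/EXPIRE key scheme are taken from the text above; the helper names (mapping_commands, sync), the connection parameters, and the 900-second default TTL are assumptions of mine, not part of the original design.

```python
# Hypothetical sketch of DNS_REDIS_SYNC; the key layout and TTL handling
# follow the SET/EXPIRE scheme described above.

def mapping_commands(rows, ttl=900):
    """Translate (interfaceid, ip, dns) rows from the Zabbix "interface"
    table into the SET/EXPIRE operations for the master REDIS server."""
    for _interfaceid, ip, dns in rows:
        yield ("SET", "A:%s" % dns, ip)
        yield ("EXPIRE", "A:%s" % dns, ttl)
        yield ("SET", "PTR:%s" % ip, dns)
        yield ("EXPIRE", "PTR:%s" % ip, ttl)

def sync(mysql_params, redis_host, ttl=900):
    # Third-party dependencies (MySQLdb and redis-py) are imported here so
    # the pure helper above stays importable without them.
    import MySQLdb
    import redis

    db = MySQLdb.connect(**mysql_params)
    r = redis.Redis(host=redis_host)
    cur = db.cursor()
    cur.execute("select interfaceid,ip,dns from interface where type = 1")
    for op, key, value in mapping_commands(cur.fetchall(), ttl):
        if op == "SET":
            r.set(key, value)
        else:
            r.expire(key, value)
```

Run sync() from cron (or a Zabbix action) as often as you want the mappings refreshed; each run re-arms the expiration clock on every key.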
7. To expire or not to expire.
The easiest and most dangerous way to remove old info from the DNS zones stored in REDIS is to use REDIS's EXPIRE command and capabilities. This will work great as long as you never get into a situation like this:
Downtime of the Zabbix MySQL server > key expiration time.
One way to deal with that situation is to monitor the downtime of the primary Zabbix MySQL server from another Zabbix server which is configured to monitor the primary (you should have this server already), and when the downtime crosses a pessimistic threshold, execute an action script which will extend the TTL for the keys in the master REDIS server.
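A hedged sketch of such an action script, assuming redis-py and the A:/PTR: key layout from Chapter 6; the 24-hour emergency TTL and the helper names are illustrative choices of mine:

```python
EMERGENCY_TTL = 86400  # 24 hours; an arbitrary example value

def keys_to_extend(keys):
    """Select only the DNS keys (A:/PTR:) out of an arbitrary key listing,
    so non-DNS keys in the same REDIS instance are left alone."""
    return [k for k in keys if k.startswith("A:") or k.startswith("PTR:")]

def extend_ttls(redis_host):
    # redis-py imported here so the pure helper above stays testable without it
    import redis

    r = redis.Redis(host=redis_host)
    for key in keys_to_extend(k.decode() for k in r.scan_iter()):
        r.expire(key, EMERGENCY_TTL)
```

Wire extend_ttls() into the Zabbix action fired by the downtime trigger; once the MySQL server is back, the next DNS_REDIS_SYNC run restores the normal conservative TTLs.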
8. Anatomy of resolver.py
Before you write your resolver.py, consult the Unbound documentation on how to write Unbound Python modules and how to use the unbound module. Also, be aware of a "gotcha" for resolver.py: since it is executed in embedded Python, it does not inherit the location of some of the Python modules. Be prepared to define the path to those modules using sys.path.append(…) calls.
The main callback for query processing inside resolver.py will be the function operate(id, event, qstate, qdata). Its parameters are:
- id is a module identifier (integer);
- event is the type of the event accepted by the module. Look at the documentation for the available event types; for the resolver, we need to catch MODULE_EVENT_PASS and MODULE_EVENT_NEW;
- qstate is a module_qstate data structure;
- qdata is a query_info data structure.
First, qstate.qinfo.qname_str will contain your query. The best way to detect whether this is a query for the direct or the reverse zones is to issue this call:
socket.inet_aton(qstate.qinfo.qname_str[:-1])
and then catch the exceptions. If you get an exception, it is the direct zone; if not, the reverse.
Second, you will need to build a return message, like this:
msg = DNSMessage(qstate.qinfo.qname_str, RR_TYPE_A, RR_CLASS_IN, PKT_QR | PKT_RA | PKT_AA)
Then, depending on which zone you are querying, you send one of the following requests to REDIS:
GET A:%(name)
GET PTR:%(ip)
If REDIS returns None, you shall query the Zabbix MySQL database with one of the following queries:
select interfaceid,ip,dns from interface where type = 1 and dns = '%(name)';
select interfaceid,ip,dns from interface where type = 1 and ip = '%(ip)';
If the MySQL query returned data, you shall populate REDIS as described in Chapter 6, fill the return message, and invalidate and re-populate the Unbound cache using the following calls:
invalidateQueryInCache(qstate, qstate.return_msg.qinfo)
storeQueryInCache(qstate, qstate.return_msg.qinfo, qstate.return_msg.rep, 0)
The return message is filled by appending results to msg.answer:
"%(name) 900 IN A %(ip)" "%(in_addr_arpa) 900 IN PTR %(name)."
for the direct and reverse zones respectively.
qstate also needs to be updated with information about the return message before you manipulate the Unbound cache:
msg.set_return_msg(qstate)
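Putting the pieces of this chapter together, a skeleton operate() might look like the following. The Unbound names (DNSMessage, MODULE_EVENT_*, RR_*, PKT_*, invalidateQueryInCache, storeQueryInCache) come from Unbound's embedded pythonmod environment as described above; the lookup helpers redis_get and mysql_lookup_by_name are assumed placeholders, and only the direct-zone branch is shown.

```python
import socket

def is_direct_query(qname):
    """Direct zone if the dot-stripped name is NOT a parseable IPv4 address,
    per the socket.inet_aton() trick described above (exception -> direct)."""
    try:
        socket.inet_aton(qname.rstrip("."))
        return False  # parsed as an IP -> reverse (PTR) lookup
    except socket.error:
        return True   # not an IP -> direct (A) lookup

def operate(id, event, qstate, qdata):
    # Unbound calls this for every query; the undefined names below are
    # injected by the embedded pythonmod environment, not importable here.
    if event not in (MODULE_EVENT_NEW, MODULE_EVENT_PASS):
        qstate.ext_state[id] = MODULE_ERROR
        return True

    qname = qstate.qinfo.qname_str
    msg = DNSMessage(qname, RR_TYPE_A, RR_CLASS_IN, PKT_QR | PKT_RA | PKT_AA)

    if is_direct_query(qname):
        ip = redis_get("A:%s" % qname.rstrip("."))  # assumed redis-py wrapper
        if ip is None:
            ip = mysql_lookup_by_name(qname)        # assumed MySQL fallback
        if ip:
            msg.answer.append("%s 900 IN A %s" % (qname, ip))
    # ... the reverse (PTR) branch is symmetrical and omitted here

    msg.set_return_msg(qstate)
    invalidateQueryInCache(qstate, qstate.return_msg.qinfo)
    storeQueryInCache(qstate, qstate.return_msg.qinfo, qstate.return_msg.rep, 0)
    qstate.return_rcode = RCODE_NOERROR
    qstate.ext_state[id] = MODULE_FINISHED
    return True
```

This is a sketch of the control flow only; a production module would also populate REDIS on the MySQL fallback path, as described in Chapter 6.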
9. Summary.
Well, now you know enough to integrate information from your Zabbix instance into your enterprise DNS.