
Newsflash

The Teachers' Circle, by Kru Champ 1

Hello, and welcome to the Teachers' Circle. All fellow Thai teachers are invited to join us in advancing education together.

Newsfeeds
Planet MySQL
Planet MySQL - https://planet.mysql.com

  • How to set up an RDS MySQL (Relational Database Service for MySQL) instance on AWS
    RDS (Relational Database Service) is part of the “Database” services of AWS (Amazon Web Services). In this article, we will see how to create an RDS MySQL instance, provision it, and terminate it.
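    As a companion to the article's summary, here is a minimal sketch of what such provisioning can look like with the AWS SDK for Python (boto3). The instance identifier, credentials, and sizes below are placeholders, and the boto3 calls themselves are shown only in comments, not executed.

```python
# Sketch (not from the article): assembling the parameters a boto3
# rds.create_db_instance() call would need for a small MySQL instance.
# All identifiers and credentials below are placeholders.

def build_rds_mysql_params(identifier, username, password,
                           instance_class="db.t3.micro", storage_gb=20):
    """Assemble the keyword arguments for boto3's create_db_instance()."""
    return {
        "DBInstanceIdentifier": identifier,
        "DBInstanceClass": instance_class,
        "Engine": "mysql",
        "MasterUsername": username,
        "MasterUserPassword": password,
        "AllocatedStorage": storage_gb,
    }

params = build_rds_mysql_params("demo-mysql", "admin", "change-me")

# With AWS credentials configured, provisioning would then be:
#   import boto3
#   boto3.client("rds").create_db_instance(**params)
# and termination:
#   boto3.client("rds").delete_db_instance(
#       DBInstanceIdentifier="demo-mysql", SkipFinalSnapshot=True)
print(params["Engine"])
```

    The same parameter dictionary can be reused for both the console walk-through the article describes and scripted provisioning.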

  • Migration from Percona XtraDB Cluster/Galera to MySQL InnoDB Cluster in 10 steps
    MySQL InnoDB Cluster is the official High Availability solution for and from MySQL. MySQL InnoDB Cluster is composed of MySQL Server, MySQL Group Replication, MySQL Router and MySQL Shell. Before InnoDB Cluster there was no standard solution for MySQL High Availability and many custom solutions were used (some better than others). But there was a good solution using principles similar to MySQL Group Replication: Galera. Now that MySQL InnoDB Cluster is mature and easier to orchestrate than Galera, I receive a lot of requests on how to migrate from Galera (or Percona XtraDB Cluster) to MySQL InnoDB Cluster. Some time ago I already wrote an article on this process: how to migrate from Galera to MySQL Group Replication. In this article we will see how we can migrate from Percona XtraDB Cluster 8.0 to MySQL InnoDB Cluster 8.0 in 10 steps!

    Initial Situation

    The initial situation is illustrated in the picture below: 3 nodes, a load balancer (like HAProxy) and applications connecting to it. (P): Primary, (S): Secondary.

    Just some checks to see which version we are using and that all nodes are online and in sync:

      mysql> show status like 'wsrep%provider_v%';
      +------------------------+-----------------------------------+
      | Variable_name          | Value                             |
      +------------------------+-----------------------------------+
      | wsrep_provider_vendor  | Codership Oy <info@codership.com> |
      | wsrep_provider_version | 4.7(r752664d)                     |
      +------------------------+-----------------------------------+

      mysql> select @@version, @@version_comment\G
      *************************** 1. row ***************************
              @@version: 8.0.23-14.1
      @@version_comment: Percona XtraDB Cluster (GPL), Release rel14, Revision d3b9a1d, WSREP version 26.4.3

      mysql> select host_name, status from performance_schema.pxc_cluster_view;
      +-----------+--------+
      | host_name | status |
      +-----------+--------+
      | pxc2      | SYNCED |
      | pxc3      | SYNCED |
      | pxc1      | SYNCED |
      +-----------+--------+

    We also need to check that GTIDs are enabled; if not, it’s highly recommended to enable them:

      mysql> select @@gtid_mode;
      +-------------+
      | @@gtid_mode |
      +-------------+
      | ON          |
      +-------------+

    The 10 steps to migrate from Galera/PXC to MySQL InnoDB Cluster are:

    1. Remove one node from the cluster and install MySQL on it
    2. Create an InnoDB Cluster from that single node
    3. Set up asynchronous replication
    4. Remove a second node from the cluster and install MySQL on it
    5. Join it to the MySQL InnoDB Cluster
    6. Install MySQL Router and bootstrap it
    7. Point the application to MySQL Router
    8. Stop asynchronous replication
    9. Stop the last PXC node and install MySQL on it
    10. Join the last node to the MySQL InnoDB Cluster

    Step 1: remove one node

    We will remove pxc3 from the cluster (we always remove a Secondary node). Then we install MySQL on it:

      [root@pxc3 ~]# systemctl stop mysql

    We can see from another node that pxc3 is not part of the cluster anymore:

      mysql> select host_name, status from performance_schema.pxc_cluster_view;
      +-----------+--------+
      | host_name | status |
      +-----------+--------+
      | pxc2      | SYNCED |
      | pxc1      | SYNCED |
      +-----------+--------+

    Before installing MySQL on it, we need to remove all Galera-dedicated configuration settings from my.cnf. In my case I commented out all lines starting with wsrep_ and the line starting with pxc_. Now we can install MySQL on it:

      [root@pxc3 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
      [root@pxc3 ~]# yum swap percona-xtradb-cluster-* mysql-community-server
      ...
      ================================================================================
       Package                              Arch   Version          Repository             Size
      ================================================================================
      Installing:
       mysql-community-server               x86_64 8.0.27-1.el7     mysql80-community      448 M
      Removing:
       percona-xtradb-cluster               x86_64 8.0.23-14.1.el7  @pxc-80-release-x86_64 0.0
       percona-xtradb-cluster-client        x86_64 8.0.23-14.1.el7  @pxc-80-release-x86_64 62 M
       percona-xtradb-cluster-server        x86_64 8.0.23-14.1.el7  @pxc-80-release-x86_64 421 M
       percona-xtradb-cluster-shared        x86_64 8.0.23-14.1.el7  @pxc-80-release-x86_64 7.1 M
       percona-xtradb-cluster-shared-compat x86_64 8.0.23-14.1.el7  @pxc-80-release-x86_64 6.7 M
      Installing for dependencies:
       mysql-community-client               x86_64 8.0.27-1.el7     mysql80-community      53 M
       mysql-community-client-plugins       x86_64 8.0.27-1.el7     mysql80-community      5.7 M
       mysql-community-libs                 x86_64 8.0.27-1.el7     mysql80-community      4.7 M
      Updating for dependencies:
       mysql-community-common               x86_64 8.0.27-1.el7     mysql80-community      627 k
      Removing for dependencies:
       perl-DBD-MySQL                       x86_64 4.023-6.0.1.el7  @ol7_latest            323 k

      Transaction Summary
      ================================================================================
      Install  1 Package  (+3 Dependent packages)
      Upgrade             ( 1 Dependent package)
      Remove   5 Packages (+1 Dependent package)

      Total download size: 512 M
      Is this ok [y/d/N]:

    When installed, we can start MySQL:

      [root@pxc3 ~]# systemctl start mysqld
      [root@pxc3 ~]# systemctl status mysqld
      ● mysqld.service - MySQL Server
         Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
         Active: active (running) since Tue 2021-10-19 15:39:58 GMT; 5s ago
           Docs: man:mysqld(8)
                 http://dev.mysql.com/doc/refman/en/using-systemd.html
        Process: 9446 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
       Main PID: 9474 (mysqld)
         Status: "Server is operational"
         Memory: 422.9M
         CGroup: /system.slice/mysqld.service
                 └─9474 /usr/sbin/mysqld

      Oct 19 15:39:56 pxc3 systemd[1]: Starting MySQL Server...
      Oct 19 15:39:58 pxc3 systemd[1]: Started MySQL Server.

    This is what we have at the end of step 1 (for readability, I changed the hostname of the nodes, so pxc3 became mysql1).

    Step 2: MySQL InnoDB Cluster Creation

    To start this part, we will first install MySQL Shell:

      [root@mysql1 ~]# yum install mysql-shell

    As you can see above, the hostname has changed. If you have never used MySQL Shell, I recommend reading this article: Reminder When Using MySQL Shell. We will now connect to our MySQL instance (using localhost) and configure it to be ready for InnoDB Cluster:

      [root@mysql1 ~]# mysqlsh root@localhost

    This is how it looks:

      JS > dba.configureInstance()

    MySQL Shell might complain about the user used to connect (in my case root, which can only connect from localhost). I highly recommend creating a dedicated user to manage the cluster (option 2). I created a user called clusteradmin and agreed to all the changes requested by MySQL Shell. Then I connect back to my instance using the newly created user:

      JS > \c clusteradmin@mysql1
      Creating a session to 'clusteradmin@mysql1'
      Please provide the password for 'clusteradmin@mysql1': *********
      Save password for 'clusteradmin@mysql1'? [Y]es/[N]o/Ne[v]er (default No): yes
      Fetching schema names for autocompletion... Press ^C to stop.
      Closing old connection...
      Your MySQL connection id is 13 (X protocol)
      Server version: 8.0.27 MySQL Community Server - GPL
      No default schema selected; type \use <schema> to set one.

    And we can create our new MySQL InnoDB Cluster:

      JS > cluster=dba.createCluster("MyCluster")
      A new InnoDB cluster will be created on instance 'mysql1:3306'.
      Validating instance configuration at mysql1:3306...
      This instance reports its own address as mysql1:3306
      Instance configuration is suitable.
      NOTE: Group Replication will communicate with other members using 'mysql1:33061'.
      Use the localAddress option to override.
      Creating InnoDB cluster 'MyCluster' on 'mysql1:3306'...
      Adding Seed Instance...
      Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
      At least 3 instances are needed for the cluster to be able to withstand up to
      one server failure.
      <Cluster:MyCluster>

    And we can check our (currently) single-node InnoDB Cluster:

      JS > cluster.status()
      {
        "clusterName": "MyCluster",
        "defaultReplicaSet": {
          "name": "default",
          "primary": "mysql1:3306",
          "ssl": "REQUIRED",
          "status": "OK_NO_TOLERANCE",
          "statusText": "Cluster is NOT tolerant to any failures.",
          "topology": {
            "mysql1:3306": {
              "address": "mysql1:3306",
              "memberRole": "PRIMARY",
              "mode": "R/W",
              "readReplicas": {},
              "replicationLag": null,
              "role": "HA",
              "status": "ONLINE",
              "version": "8.0.27"
            }
          },
          "topologyMode": "Single-Primary"
        },
        "groupInformationSourceMember": "mysql1:3306"
      }

    This is the situation at the end of step 2.

    Step 3: Setup Asynchronous Replication

    Since we removed the server from the Galera cluster, our applications are still writing to PXC… it’s time to catch up! To achieve this, we will set up asynchronous replication between the Primary node of PXC and the Primary node (the only one for the moment) of our InnoDB Cluster. On pxc1 (the Primary), we create a dedicated user for this replication task:

      pxc1-mysql> CREATE USER repl@'%' IDENTIFIED BY 'Rpl001#!' REQUIRE SSL;
      pxc1-mysql> GRANT REPLICATION SLAVE on *.* to repl@'%';

    On pxc1 we can check the value of GTID_EXECUTED and compare it to the one on mysql1:

      pxc1-mysql> select @@gtid_executed\G
      *************************** 1. row ***************************
      @@gtid_executed: a18338f8-30cc-11ec-85df-53e2b3eb859d:1-888175

      mysql1> select @@gtid_executed\G
      *************************** 1. row ***************************
      @@gtid_executed: a18338f8-30cc-11ec-85df-53e2b3eb859d:1-555107,
      fc4f6b42-30f5-11ec-b31e-02001708e20d:1-59,
      fc4f6f48-30f5-11ec-b31e-02001708e20d:1

    We can see that we are missing several transactions from the current cluster in production (PXC).
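    The gap between the two GTID_EXECUTED values above is exactly what asynchronous replication has to fill. As an illustration only (not part of the original article), the effect of MySQL's GTID_SUBTRACT() can be sketched in plain Python for simple single-interval GTID sets; real GTID sets can hold multiple intervals per UUID, which this deliberately ignores.

```python
# Simplified model of GTID_SUBTRACT() for sets of the form "uuid:1-N"
# (one interval per source UUID). It only illustrates the catch-up gap
# between the PXC primary and the new InnoDB Cluster node shown above.

def parse_gtid_set(text):
    """Map source UUID -> highest transaction number seen."""
    result = {}
    for part in text.replace("\n", "").split(","):
        uuid, _, interval = part.strip().partition(":")
        result[uuid] = int(interval.split("-")[-1])
    return result

def missing_transactions(source_set, replica_set):
    """Per-UUID count of transactions the replica still has to apply."""
    source = parse_gtid_set(source_set)
    replica = parse_gtid_set(replica_set)
    return {u: n - replica.get(u, 0)
            for u, n in source.items() if n > replica.get(u, 0)}

pxc1 = "a18338f8-30cc-11ec-85df-53e2b3eb859d:1-888175"
mysql1 = ("a18338f8-30cc-11ec-85df-53e2b3eb859d:1-555107,"
          "fc4f6b42-30f5-11ec-b31e-02001708e20d:1-59,"
          "fc4f6f48-30f5-11ec-b31e-02001708e20d:1")
print(missing_transactions(pxc1, mysql1))
# {'a18338f8-30cc-11ec-85df-53e2b3eb859d': 333068}
```

    In a live migration you would let MySQL do this for you, e.g. SELECT GTID_SUBTRACT(@@global.gtid_executed, '<replica set>') on the source.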
    The cluster’s uuid is a18338f8-30cc-11ec-85df-53e2b3eb859d. It’s time to finish the replication setup and start it. On mysql1, we do:

      mysql1> CHANGE REPLICATION SOURCE TO SOURCE_HOST="pxc1", SOURCE_USER="repl",
              SOURCE_PASSWORD="Rpl001#!", SOURCE_AUTO_POSITION=1, SOURCE_SSL=1
              FOR CHANNEL "async_from_pxc";
      mysql1> START REPLICA FOR CHANNEL "async_from_pxc";

    Now mysql1 will start to catch up:

      mysql1> show replica status for channel "async_from_pxc"\G
      *************************** 1. row ***************************
                   Replica_IO_State: Waiting for source to send event
                        Source_Host: pxc1
                        Source_User: repl
                        Source_Port: 3306
                      Connect_Retry: 60
                    Source_Log_File: binlog.000002
                Read_Source_Log_Pos: 569358564
                     Relay_Log_File: mysql1-relay-bin-async_from_pxc.000002
                      Relay_Log_Pos: 47544213
              Relay_Source_Log_File: binlog.000002
                 Replica_IO_Running: Yes
                Replica_SQL_Running: Yes
                    Replicate_Do_DB:
                Replicate_Ignore_DB:
                 Replicate_Do_Table:
             Replicate_Ignore_Table:
            Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
                         Last_Errno: 0
                         Last_Error:
                       Skip_Counter: 0
                Exec_Source_Log_Pos: 392505871
                    Relay_Log_Space: 224397131
                    Until_Condition: None
                     Until_Log_File:
                      Until_Log_Pos: 0
                 Source_SSL_Allowed: Yes
                 Source_SSL_CA_File:
                 Source_SSL_CA_Path:
                    Source_SSL_Cert:
                  Source_SSL_Cipher:
                     Source_SSL_Key:
              Seconds_Behind_Source: 9020
      Source_SSL_Verify_Server_Cert: No
                      Last_IO_Errno: 0
                      Last_IO_Error:
                     Last_SQL_Errno: 0
                     Last_SQL_Error:
        Replicate_Ignore_Server_Ids:
                   Source_Server_Id: 1
                        Source_UUID: 9ddb7553-30cc-11ec-bae3-00001701a2bb
                   Source_Info_File: mysql.slave_master_info
                          SQL_Delay: 0
                SQL_Remaining_Delay: NULL
          Replica_SQL_Running_State: Waiting for dependent transaction to commit
                 Source_Retry_Count: 86400
                        Source_Bind:
            Last_IO_Error_Timestamp:
           Last_SQL_Error_Timestamp:
                     Source_SSL_Crl:
                 Source_SSL_Crlpath:
                 Retrieved_Gtid_Set: a18338f8-30cc-11ec-85df-53e2b3eb859d:555108-1018737
                  Executed_Gtid_Set: a18338f8-30cc-11ec-85df-53e2b3eb859d:1-653378,
      fc4f6b42-30f5-11ec-b31e-02001708e20d:1-59,
      fc4f6f48-30f5-11ec-b31e-02001708e20d:1
                      Auto_Position: 1
               Replicate_Rewrite_DB:
                       Channel_Name: async_from_pxc
                 Source_TLS_Version:
             Source_public_key_path:
              Get_Source_public_key: 0
                  Network_Namespace:

    This is the situation at the end of step 3.

    Step 4: Remove a second node

    We will now stop mysqld on pxc2 and install MySQL on it like we did in step 1:

      [root@pxc2 ~]# systemctl stop mysql
      [root@pxc2 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
      [root@pxc2 ~]# yum swap percona-xtradb-cluster-* mysql-community-server
      [root@pxc2 ~]# yum install mysql-shell -y

    And here is the situation:

    Step 5: Add a new instance to MySQL InnoDB Cluster

    We will configure mysql2 (previously called pxc2) and add it to the cluster. We first need to remove all Galera and PXC related entries from my.cnf. When done, we can start mysqld:

      [root@pxc2 ~]# vi /etc/my.cnf
      [root@pxc2 ~]# systemctl start mysqld

    We use MySQL Shell to configure the instance (like we did in Step 2) and we create the same user (clusteradmin) with the same password. When done, from mysql1, we can add the new instance to the MySQL InnoDB Cluster like this:

      JS > cluster.addInstance("mysql2")

    If we check the status of the cluster we will notice an error:

      JS > cluster.status()
      {
        "clusterName": "MyCluster",
        "defaultReplicaSet": {
          "name": "default",
          "primary": "mysql1:3306",
          "ssl": "REQUIRED",
          "status": "OK_NO_TOLERANCE",
          "statusText": "Cluster is NOT tolerant to any failures.",
          "topology": {
            "mysql1:3306": {
              "address": "mysql1:3306",
              "instanceErrors": [
                "ERROR: Unrecognized replication channel 'async_from_pxc' found. Unmanaged replication channels are not supported."
              ],
              "memberRole": "PRIMARY",
              "mode": "R/W",
              "readReplicas": {},
              "replicationLag": null,
              "role": "HA",
              "status": "ONLINE",
              "version": "8.0.27"
            },
            "mysql2:3306": {
              "address": "mysql2:3306",
              "memberRole": "SECONDARY",
              "mode": "R/O",
              "readReplicas": {},
              "replicationLag": null,
              "role": "HA",
              "status": "ONLINE",
              "version": "8.0.27"
            }
          },
          "topologyMode": "Single-Primary"
        },
        "groupInformationSourceMember": "mysql1:3306"
      }

    We can ignore this issue for the moment. This is the situation at the end of Step 5.

    Step 6: MySQL Router

    We need to install MySQL Router. The recommended location is on the application server(s); MySQL Router should be seen as an extension of the MySQL Connector. I use only one in this schema. On the app server we do:

      [root@app1 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
      [root@app1 ~]# yum install -y mysql-router

    We can bootstrap MySQL Router:

      [root@app1 ~]# mysqlrouter --bootstrap clusteradmin@mysql1 \
                     --user mysqlrouter --conf-use-gr-notifications
      Please enter MySQL password for clusteradmin:
      # Bootstrapping system MySQL Router instance...

      - Creating account(s) (only those that are needed, if any)
      - Verifying account (using it to run SQL queries that would be run by Router)
      - Storing account in keyring
      - Adjusting permissions of generated files
      - Creating configuration /etc/mysqlrouter/mysqlrouter.conf
      Existing configuration backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak'

      # MySQL Router configured for the InnoDB Cluster 'MyCluster'

      After this MySQL Router has been started with the generated configuration

          $ /etc/init.d/mysqlrouter restart
      or
          $ systemctl start mysqlrouter
      or
          $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

      InnoDB Cluster 'MyCluster' can be reached by connecting to:

      # MySQL Classic protocol
      - Read/Write Connections: localhost:6446
      - Read/Only Connections:  localhost:6447

      # MySQL X protocol
      - Read/Write Connections: localhost:6448
      - Read/Only Connections:  localhost:6449

    We can start MySQL Router:

      [root@app1 ~]# systemctl start mysqlrouter
      [root@app1 ~]# systemctl status mysqlrouter
      ● mysqlrouter.service - MySQL Router
         Loaded: loaded (/usr/lib/systemd/system/mysqlrouter.service; disabled; vendor preset: disabled)
         Active: active (running) since Tue 2021-10-19 18:15:57 GMT; 4s ago
       Main PID: 18352 (mysqlrouter)
         Status: "running"
         Memory: 13.7M
         CGroup: /system.slice/mysqlrouter.service
                 └─18352 /usr/bin/mysqlrouter

      Oct 19 18:15:57 pxc1 systemd[1]: Starting MySQL Router...
      Oct 19 18:15:57 pxc1 mysqlrouter[18352]: logging facility initialized, switching l...on
      Oct 19 18:15:57 pxc1 systemd[1]: Started MySQL Router.
      Hint: Some lines were ellipsized, use -l to show in full.
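    The bootstrap output above advertises port 6446 for read/write and 6447 for read-only classic-protocol connections. A hypothetical helper (not part of MySQL Router) shows how an application might pick the Router endpoint by intent instead of hard-coding a server address:

```python
# Hypothetical helper: choose the MySQL Router endpoint for a
# connection based on whether the workload writes. Ports are the
# classic-protocol defaults printed by the bootstrap above.

ROUTER_PORTS = {
    "read_write": 6446,   # routed to the PRIMARY
    "read_only": 6447,    # load-balanced across SECONDARY nodes
}

def router_endpoint(writes, host="localhost"):
    """Return (host, port) for a read/write or read-only session."""
    intent = "read_write" if writes else "read_only"
    return host, ROUTER_PORTS[intent]

# A connector would then use these instead of a fixed server address:
print(router_endpoint(writes=True))   # ('localhost', 6446)
print(router_endpoint(writes=False))  # ('localhost', 6447)
```

    This is the indirection that makes Step 7 possible: the application only ever knows about Router, and Router follows the cluster topology.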
    This is the situation at the end of Step 6.

    Step 7: Switch application connections

    We point the application to MySQL Router (localhost:6446 for Read/Write traffic) and we can see that GTIDs using the MySQL InnoDB Cluster’s UUID are now being processed. This is the actual situation:

    Step 8: Stop Asynchronous Replication

    We can now stop the asynchronous replication and forget about it:

      mysql1> stop replica for channel "async_from_pxc";
      mysql1> reset replica all for channel "async_from_pxc";

    Step 9: Bye Bye Galera and PXC

    Now we can stop PXC on pxc1 and install MySQL Server like we did in Step 1:

      [root@pxc1 ~]# systemctl stop mysql
      [root@pxc1 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
      [root@pxc1 ~]# yum swap percona-xtradb-cluster-* mysql-community-server

    And we are ready for the last step!

    Step 10: MySQL InnoDB Cluster with 3 nodes

    We need to edit my.cnf and configure the instance like in Step 5; I also changed the hostname from pxc1 to mysql3:

      [root@mysql3 ~]# vi /etc/my.cnf
      [root@mysql3 ~]# systemctl start mysqld

    We can configure the instance:

      [root@mysql3 ~]# mysqlsh root@localhost
      JS > dba.configureInstance()

    Now, on mysql1 (or connecting to it), we can add the new instance to the cluster:

      [root@mysql1 ~]# mysqlsh clusteradmin@mysql1
      JS > cluster=dba.getCluster()
      JS > cluster.addInstance("mysql3")

    Finally we can check the status of our new MySQL InnoDB Cluster:

      JS > cluster.status()
      {
        "clusterName": "MyCluster",
        "defaultReplicaSet": {
          "name": "default",
          "primary": "mysql1:3306",
          "ssl": "REQUIRED",
          "status": "OK",
          "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
          "topology": {
            "mysql1:3306": {
              "address": "mysql1:3306",
              "memberRole": "PRIMARY",
              "mode": "R/W",
              "readReplicas": {},
              "replicationLag": null,
              "role": "HA",
              "status": "ONLINE",
              "version": "8.0.27"
            },
            "mysql2:3306": {
              "address": "mysql2:3306",
              "memberRole": "SECONDARY",
              "mode": "R/O",
              "readReplicas": {},
              "replicationLag": null,
              "role": "HA",
              "status": "ONLINE",
              "version": "8.0.27"
            },
            "mysql3:3306": {
              "address": "mysql3:3306",
              "memberRole": "SECONDARY",
              "mode": "R/O",
              "readReplicas": {},
              "replicationLag": null,
              "role": "HA",
              "status": "ONLINE",
              "version": "8.0.27"
            }
          },
          "topologyMode": "Single-Primary"
        },
        "groupInformationSourceMember": "mysql1:3306"
      }

    Conclusion

    As you can see, it’s very easy to migrate from Galera to MySQL InnoDB Cluster. Additionally, creating and managing the cluster is much easier with MySQL Shell. It’s even possible to link multiple InnoDB Clusters and manage everything easily: check MySQL InnoDB ClusterSet. Enjoy MySQL, MySQL Shell and MySQL InnoDB Cluster!
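    As a recap, the ten steps of the procedure can be captured as plain data and used as a migration checklist; this sketch (not part of the original article, wording taken from the step list above) is the kind of structure a runbook script could iterate over.

```python
# Recap sketch: the article's 10 migration steps as a checklist that
# a runbook or tracking script could walk through in order.
MIGRATION_STEPS = [
    "Remove one node from the cluster and install MySQL on it",
    "Create an InnoDB Cluster from that single node",
    "Set up asynchronous replication",
    "Remove a second node from the cluster and install MySQL on it",
    "Join it to the MySQL InnoDB Cluster",
    "Install MySQL Router and bootstrap it",
    "Point the application to MySQL Router",
    "Stop asynchronous replication",
    "Stop the last PXC node and install MySQL on it",
    "Join the last node to the MySQL InnoDB Cluster",
]

for number, step in enumerate(MIGRATION_STEPS, start=1):
    print(f"Step {number:2d}: {step}")
```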

  • How to Retrieve MySQL Last Insert ID in CodeIgniter 4
    I use CodeIgniter 4 a great deal for learning, personal projects, and application development. In this post, I cover two different methods you can use to retrieve the MySQL Last Insert ID value after executing an INSERT statement in CodeIgniter. Continue reading for more information…

    Self-Promotion: If you enjoy the content written here, by all means, share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!

    Model Setup

    Starting out, I use the CLI Generator feature and create a Model by executing this command in the terminal in the project root:

      php spark make:model user --suffix

    I have removed much of the boilerplate code, keeping this bare-bones UserModel. I will fill in more functionality as needed.

    Creating a Home Controller users() method

    Visiting the URL /cicrud/home/users executes the users() method in the default Home Controller file.

    Important! Assume the $data associative array has been validated and sanitized prior to being passed into the Model insert() method. Always validate and sanitize your data!

    Are you a Medium member? If so, receive an email notification each time I publish a blog post if you prefer the Medium platform. Not a member? No worries! Use my sign-up link (I will get a commission at no extra cost to you) and join. I really enjoy reading all the great content there and I know you will too!!!

    Checking the data in the users table, we can see the row has been added.

    There are times as a Developer when you need to capture the Last Insert ID value from a MySQL table column defined with the AUTO_INCREMENT attribute. MySQL has a native LAST_INSERT_ID() information function you can use in queries to retrieve this value. If you are using CodeIgniter 4, there are a couple of different methods you can leverage, depending on how you are inserting the data, to retrieve the Last Insert ID value.
    I’ll cover them both in this post.

    CodeIgniter 4 Model getInsertID() method

    CodeIgniter 4 Models have built-in methods to handle most of the CRUD operations you need in an application. I’ve written about the CodeIgniter 4 Model insert() method in this Medium post, CodeIgniter 4 CRUD Series with MySQL: Create. If you have inserted a new row using the Model insert() method and need the Last Insert ID value, how can you get it? You can call the Model getInsertID() method on your current Model instance and retrieve the value. As of the time of writing, I haven’t found the official documentation for this method. For the purposes of this blog post, I am piggybacking off of a post I found on Stack Overflow. Feel free to provide relevant resources or links in the comments below for this CodeIgniter 4 method. See the updated users() method and its code below. Revisiting the users table shows the value as well.

    I write a weekly email about SQL/PHP that I’m studying, learning about, and interested in. If that sounds like something you would like to be a part of, find out more about it here. Thank you!

    CodeIgniter 4 Query Builder insertID()

    In the UserModel Model file, I have created this dataInsert() method. Using the Query Builder Class, we can INSERT and retrieve the Last Insert ID value, leveraging the built-in $builder insertID() method, which is called on the $db connection instance. In the Home Controller, I have edited the users() method to use the UserModel dataInsert() method shown in this code snippet:

      public function users()
      {
          $user_model = new UserModel();
          $data = [
              'first_name' => 'Jane',
              'last_name'  => 'Doe',
              'dob'        => '2021-01-01'
          ];
          $last_insert_id = $user_model->dataInsert($data);
          echo 'Last Insert ID is: ' . $last_insert_id;
      }

    Again, visiting the URL /cicrud/home/users executes the users() Home Controller method.
    However, in this instance, the Query Builder insertID() method retrieves the Last Insert ID value instead of the Model getInsertID() method. And there you have it: two ways to retrieve the Last Insert ID value in CodeIgniter 4. If you know of any others, please share them in the comments below so that I and other readers know of them as well. Thank you!

    Similar Last Insert ID blog posts

    I have written several blog posts about MySQL Last Insert ID, covering a couple of different environments/languages, and want to share them below with any readers who are interested. Please share them along as well! Thank you!

    - MySQL’s AUTO_INCREMENT Attribute
    - MySQL Shell get_auto_increment_value() method – Python mode
    - PHP PDO lastInsertId() method
    - The AUTO_INCREMENT column attribute in MySQL – A beginning perspective

    If you have any questions or see any mistakes in the code, please let me know via the comments. Constructive comments help me provide accurate blog posts and are much appreciated. Thank you for reading! Like what you have read? See anything incorrect? Please comment below and thank you for reading!!!

    A Call To Action!

    Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share your findings here with someone else you know who would get the same value out of it as well. Visit the Portfolio-Projects page to see blog posts/technical writing I have completed for clients. To receive email notifications (Never Spam) from this blog (“Digital Owl’s Prose”) for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the ‘Click To Subscribe!’ button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl’s Prose Privacy Policy Page for any questions you may have about: email updates, opt-in, opt-out, contact forms, etc…) Be sure to visit the “Best Of” page for a collection of my best blog posts.
Josh Otwell has a passion to study and grow as a SQL Developer and blogger. Other favorite activities find him with his nose buried in a good book, article, or the Linux command line. Among those, he shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters. Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solution(s). The majority, if not all, of the examples provided, are performed on a personal development/learning workstation environment and should not be considered production quality or ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own. The post How to Retrieve MySQL Last Insert ID in CodeIgniter 4 appeared first on Digital Owl's Prose.
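    The last-insert-id pattern the post describes is not CodeIgniter-specific. As a language-agnostic illustration (not from the post), Python's stdlib sqlite3 exposes the same auto-generated key as cursor.lastrowid, playing the role of MySQL's LAST_INSERT_ID() and CodeIgniter's getInsertID()/insertID(); the table and column names below mirror the post's example and are otherwise illustrative.

```python
# Same "last insert id" concept outside CodeIgniter: sqlite3's
# cursor.lastrowid returns the auto-generated key of the last INSERT,
# analogous to MySQL's LAST_INSERT_ID(). Schema mirrors the post.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    first_name TEXT, last_name TEXT, dob TEXT)""")

cur = conn.execute(
    "INSERT INTO user (first_name, last_name, dob) VALUES (?, ?, ?)",
    ("Jane", "Doe", "2021-01-01"))
conn.commit()
print("Last Insert ID is:", cur.lastrowid)  # Last Insert ID is: 1
```

    Whatever the framework, the value always comes from the connection that performed the INSERT, which is why both CodeIgniter methods are tied to the current Model or $db instance.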

  • Webinar: Galera Manager is now GA with ability to launch clusters on premise and in the AWS cloud
    This is a new era in deploying, managing and monitoring your Galera Cluster for MySQL with the recently released Galera Manager, now Generally Available (GA). Galera Manager is a web-based graphical user interface (GUI) tool that allows you to fully manage clusters in Amazon Web Services; you can also deploy clusters on user-provided hosts (on premise or in the cloud) and fully monitor your existing clusters.

    In this webinar, we will cover:

    - Deploying a fully managed cluster in AWS
    - Deploying a cluster on user-provided hosts
    - Monitoring an existing cluster
    - How we have chosen to use CLONE SST for MySQL 8 deployments
    - How you can successfully deploy all the various servers that we support
    - Utilising the over 600 monitoring metrics to effectively manage your Galera Cluster

    Join the packed webinar for discussions and live demos and take your Galera Cluster deployments further with Galera Manager.

    EMEA Webinar, Thursday October 28 2021, 2:00-2:50 PM CEST: JOIN THE EMEA TIMEZONE WEBINAR
    USA Webinar, Thursday October 28 2021, 9:00-9:50 AM PDT: JOIN THE USA TIMEZONE WEBINAR

    DOWNLOAD GALERA MANAGER

  • MySQL Shell and extra Python modules
    When you use MySQL Shell with extra Python plugins (like those available on GitHub), you may sometimes need extra Python modules. Some of my plugins require requests and prettytable, for example. MySQL Shell 8.0.26 uses an embedded Python 3.9, and if your system doesn’t have that version (like Oracle Linux 8), you won’t be able to install the missing module for it. So how can we install the missing modules? The easiest method is to use pip, but if you don’t have pip for Python 3.9 installed on the system, this will be more complicated… not really!

    This is how to install pip into MySQL Shell:

      wget https://bootstrap.pypa.io/get-pip.py
      mysqlsh --py -f get-pip.py

    And now you can use pip to install extra modules:

      mysqlsh --pym pip install --user prettytable

    When you know how to do it, it’s very easy. Enjoy MySQL and MySQL Shell!
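    The pip trick above matters because modules must match the Python that the Shell embeds, not the system interpreter. From the Shell's \py mode (or any Python), a plain stdlib check shows which interpreter, and therefore which per-user site-packages directory, a "--pym pip install --user" run will target:

```python
# Check which Python interpreter is running and where "pip install
# --user" would place modules for it. Works the same inside MySQL
# Shell's \py mode, where sys describes the embedded interpreter.
import site
import sys

def embedded_python_tag():
    """Return e.g. 'python3.9', the tag used in per-user site paths."""
    return "python%d.%d" % sys.version_info[:2]

print("interpreter version:", sys.version.split()[0])
print("user modules land under:", site.getusersitepackages())
```

    If the tag printed inside the Shell differs from your system Python's, that is exactly the mismatch that makes system-wide pip installs invisible to MySQL Shell.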