Planet MySQL

Planet MySQL - https://planet.mysql.com
  1. In this post, we will discuss what to do when you add more memory to your instance. Adding memory to a server where MySQL is running is common practice when scaling resources.

First, Some Context

Scaling resources means adding more resources to your environment, and this can be split into two main approaches: vertical scaling and horizontal scaling. Vertical scaling is increasing the hardware capacity of a given instance, thus having a more powerful server, while horizontal scaling is adding more servers, a pretty standard approach for load balancing and sharding.

As traffic grows, working datasets get bigger, and we start to suffer because data that doesn't fit into memory has to be retrieved from disk. This is a costly operation, even with modern NVMe drives, so at some point we will need one of the scaling solutions mentioned above. In this case, we will discuss adding more RAM, which is usually the fastest and easiest way to scale hardware vertically, and more memory is probably the resource MySQL benefits from most.

How to Calculate Memory Utilization

First of all, we need to be clear about which variables allocate memory during MySQL operations; we will cover only the common ones, as there are many. We also need to know that some variables allocate memory globally, while others allocate per thread. For the sake of simplicity, we will assume the standard storage engine: InnoDB.

The globally allocated variables are:

- key_buffer_size: a MyISAM setting that should be set to 8-16M; anything above that is simply wrong, because we shouldn't use MyISAM tables unless there is a particular reason to. A typical scenario is MyISAM being used by system tables only, which are small (this is valid for versions up to 5.7; in MySQL 8 the system tables were migrated to the InnoDB engine). So the impact of this variable is negligible.
- query_cache_size: 0 by default and removed in 8.0, so we won't consider it.
- innodb_buffer_pool_size: the cache where InnoDB places pages to perform operations. The bigger, the better. 🙂

Of course there are others, but their impact is minimal when running with defaults.

There are also variables that are allocated per thread (or open connection): read_buffer_size, read_rnd_buffer_size, sort_buffer_size, join_buffer_size, and tmp_table_size, among a few others. All of them work very well by default, as the allocations are small and efficient. Hence, the main potential issue arises when many connections hold these buffers for some time and add extra memory pressure. The ideal situation is to control how many connections are being opened (and used) and reduce that number to one that is sufficient and doesn't hurt the application.
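A quick way to see where all of these settings currently stand in one pass (my addition, not from the original post; the arithmetic just converts bytes to MB/GB for readability):

    -- Inspect the main memory-related settings discussed above:
    SELECT @@innodb_buffer_pool_size / (1024*1024*1024) AS buffer_pool_gb,
           @@key_buffer_size         / (1024*1024)      AS key_buffer_mb,
           @@read_buffer_size        / (1024*1024)      AS read_buffer_mb,
           @@read_rnd_buffer_size    / (1024*1024)      AS read_rnd_buffer_mb,
           @@sort_buffer_size        / (1024*1024)      AS sort_buffer_mb,
           @@join_buffer_size        / (1024*1024)      AS join_buffer_mb,
           @@tmp_table_size          / (1024*1024)      AS tmp_table_mb;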
But let's not lose focus: we have more memory, and we need to know how to tune it properly to make the best use of it.

The most memory-impacting setting we need to focus on is innodb_buffer_pool_size, as this is where almost all the magic happens and it is usually the most significant memory consumer. There is an old rule of thumb that says "this setting should be around 75% of available memory", and some cloud vendors set this value to total_memory*0.75. I say "old" because that rule was good when instances with 8G or 16G of RAM were common, so allocating roughly 6G out of 8G or 13G out of 16G used to be logical.

But what if we run an instance with 100G or even 200G? It's not uncommon to see this kind of hardware nowadays, so should we use 80G out of 100G, or 160G out of 200G? That is, should we leave 20G to 40G of memory unallocated just for filesystem cache operations? While those filesystem operations are not useless, I don't see the OS needing more than 4G-8G for this purpose on a dedicated DB server. It is also recommended to use the O_DIRECT flushing method for InnoDB, which bypasses the filesystem cache.

Example

Now that we understand the primary variables that allocate memory, let's check a good use case I'm currently working on. Assume this system:

    $ free -m
                  total        used        free      shared  buff/cache   available
    Mem:         385625      307295       40921           4       37408       74865

So roughly 380G of RAM, a nice amount of memory. Now let's check the maximum potential allocation, given the maximum number of connections used. A little disclaimer here: while this query is not entirely accurate and can diverge from real results, it gives us a sense of what could potentially be allocated. (We could also take advantage of the performance_schema database, but that may require enabling some instruments that are disabled by default.)

    mysql > show global status like 'max_used_connections';
    +----------------------+-------+
    | Variable_name        | Value |
    +----------------------+-------+
    | Max_used_connections | 67    |
    +----------------------+-------+
    1 row in set (0.00 sec)

So with a maximum of 67 connections used, we can get:

    mysql > SELECT ( @@key_buffer_size
        ->        + @@innodb_buffer_pool_size
        ->        + 67 * (@@read_buffer_size
        ->        + @@read_rnd_buffer_size
        ->        + @@sort_buffer_size
        ->        + @@join_buffer_size
        ->        + @@tmp_table_size )) / (1024*1024*1024) AS MAX_MEMORY_GB;
    +---------------+
    | MAX_MEMORY_GB |
    +---------------+
    |      316.4434 |
    +---------------+
    1 row in set (0.00 sec)

So far, so good; we are within memory ranges. Now let's see how big innodb_buffer_pool_size is and whether it is well sized:

    mysql > SELECT (@@innodb_buffer_pool_size) / (1024*1024*1024) AS BUFFER_POOL_SIZE;
    +------------------+
    | BUFFER_POOL_SIZE |
    +------------------+
    |         310.0000 |
    +------------------+
    1 row in set (0.01 sec)

So the buffer pool is 310G, roughly 82% of total memory, and total usage so far is around 84%, which leaves about 60G of memory unused. Well, it is being used by the filesystem cache, which, in the end, is not used by InnoDB. Now let's get to the point: how to configure memory so it is used effectively by MySQL. From pt-mysql-summary we know that the buffer pool is completely full:

    Buffer Pool Size | 310.0G
    Buffer Pool Fill | 100%

Does this mean we need more memory?
Maybe. Let's check how many disk operations we have on an instance we know has a working dataset that doesn't fit in memory (the very reason why we increased memory size), using this command:

    mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
    | Innodb_buffer_pool_read_requests | 99857480858 |
    | Innodb_buffer_pool_reads         | 598600690   |
    | Innodb_buffer_pool_read_requests | 274985      |
    | Innodb_buffer_pool_reads         | 1602        |
    | Innodb_buffer_pool_read_requests | 267139      |
    | Innodb_buffer_pool_reads         | 1562        |
    | Innodb_buffer_pool_read_requests | 270779      |
    | Innodb_buffer_pool_reads         | 1731        |
    | Innodb_buffer_pool_read_requests | 287594      |
    | Innodb_buffer_pool_reads         | 1567        |
    | Innodb_buffer_pool_read_requests | 282786      |
    | Innodb_buffer_pool_reads         | 1754        |

Innodb_buffer_pool_read_requests: page reads satisfied from memory (good)
Innodb_buffer_pool_reads: page reads from disk (bad)

As you may notice, we still get some reads from disk, and we want to avoid them, so let's increase the buffer pool size to 340G (90% of total memory) and check again:

    mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
    | Innodb_buffer_pool_read_requests | 99937722883 |
    | Innodb_buffer_pool_reads         | 599056712   |
    | Innodb_buffer_pool_read_requests | 293642      |
    | Innodb_buffer_pool_reads         | 1           |
    | Innodb_buffer_pool_read_requests | 296248      |
    | Innodb_buffer_pool_reads         | 0           |
    | Innodb_buffer_pool_read_requests | 294409      |
    | Innodb_buffer_pool_reads         | 0           |
    | Innodb_buffer_pool_read_requests | 296394      |
    | Innodb_buffer_pool_reads         | 6           |
    | Innodb_buffer_pool_read_requests | 303379      |
    | Innodb_buffer_pool_reads         | 0           |

Now we barely go to disk at all, and the IO pressure has been released; this makes us happy, right?

Summary

If you increase the memory size of a server, you mostly need to focus on innodb_buffer_pool_size, as this is the most critical variable to tune. Allocating 90% to 95% of total available memory on big systems is not bad at all, as the OS requires only a few GB to run correctly, and a few more for swap should be enough to run without problems. Also, check the maximum number of connections required (and used), as this is a common cause of memory issues; if you need to run with 1000 connections open, then allocating 90% of memory to the buffer pool may not be possible, and some additional actions may be required (i.e., adding a proxy layer or a connection pool).

From MySQL 8, we have a new variable called innodb_dedicated_server, which auto-calculates the memory allocation. While this variable is really useful as a first approach, it may under-allocate memory on systems with more than 4G of RAM, as it sets buffer pool size = (detected server memory * 0.75), so on a 200G server we get only 150G for the buffer pool.

Conclusion

Vertical scaling is the easiest and fastest way to improve performance, and it is also cheaper, but not magical. Tuning variables properly requires analysis and an understanding of how memory is being used. This post focused on the essential variables to consider when tuning memory allocation, specifically innodb_buffer_pool_size and max_connections. Don't over-tune when it's not necessary, and be cautious of how these two affect your systems.
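One practical footnote to the summary above (my addition, not from the original post): since MySQL 5.7 the buffer pool can be resized online, which is presumably how the 310G-to-340G change above was applied without a restart. A minimal sketch:

    -- Online buffer pool resize (MySQL 5.7+); the server rounds the value up to a
    -- multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances:
    SET GLOBAL innodb_buffer_pool_size = 340 * 1024 * 1024 * 1024;

    -- Watch the resize progress while pages are reallocated:
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status';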
  2. As always, I am sharing new things I learn here on my blog. I was recently working on a requirement for a LAMP stack web application reporting dashboard in which I needed to store, and eventually retrieve, a .pdf file. I have read in several places (this fantastic book is a great resource) that a viable option is storing images or documents (.pdf in this case) in the actual database table as opposed to on the server file system. MySQL has the BLOB datatype, which can be used to store files such as .pdf, .jpg, .txt, and the like. In this blog post, I cover how I accomplished uploading and storing the actual .pdf file in a BLOB column in MySQL using PHP. Any corrections, tips, pointers, and recommendations for best practices are always welcome. We all learn as we go!!!

Self-Promotion: If you enjoy the content written here, by all means, share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!

I am using a table named 'project_pdf' with 3 columns (see accompanying screenshot) to store the data:

- 'id': type INTEGER
- 'project_name': type TEXT
- 'pdf_doc': type BLOB

Table structure for the project_pdf table.

With the simple HTML web form below, we can collect the 'project_name' and enable the .pdf file attachment upload:

Simple web form to upload a pdf to the database.

Below is the HTML source code for the above web form:

    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous"/>
    <title>Upload PDF</title>
    <h4 class="text-center" style="margin-top: 100px;">Upload A PDF To The Database</h4>
    <div class="d-flex justify-content-center align-self-center" style="margin-top: 115px;">
        <form action="submit.php" method="POST" accept-charset="utf-8" enctype="multipart/form-data">
            <div class="formgroup container-fluid">
                <label for="project_name">Project Name</label>
                <input type="text" name="project_name"/>
            </div>
            <div class="formgroup container-fluid">
                <input type="file" name="pdf_file" accept=".pdf"/>
                <input type="hidden" name="MAX_FILE_SIZE" value="67108864"/> <!--64 MB's worth in bytes-->
            </div>
            <div class="formgroup container-fluid">
                <label for="submit">Submit To Database</label><br />
                <input type="submit" name="submit"/>
            </div>
        </form>
    </div>
    <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script>
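Since the table screenshot is not reproduced here, the structure above translates to roughly this DDL. This is my reconstruction, not from the post: the primary key and AUTO_INCREMENT are assumptions, and note that a plain BLOB column tops out at 64 KB, so a PDF anywhere near the form's 64 MB limit would actually need LONGBLOB (MEDIUMBLOB caps at 16 MB):

    -- Hypothetical reconstruction of the project_pdf table from the screenshot:
    CREATE TABLE project_pdf (
        id           INT AUTO_INCREMENT PRIMARY KEY,  -- assumed key; the post only says INTEGER
        project_name TEXT,
        pdf_doc      BLOB  -- BLOB holds at most 64 KB; use MEDIUMBLOB/LONGBLOB for larger files
    );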
Looking at the code…

We will use the following PHP code stored in a script, submit.php (which is the form action), along with some best practices: if/else conditional blocks and wrapping all of the database operations in a try/catch block to complete the .pdf INSERT into the MySQL BLOB column:

    <?php
    if (isset($_POST['submit']) && !empty($_FILES['pdf_file']['name'])) {
        //a $_FILES 'error' value of zero means success. Anything else means something is wrong with the attached file.
        if ($_FILES['pdf_file']['error'] != 0) {
            echo 'Something wrong with the file.';
        } else { //pdf file uploaded okay.
            //project_name supplied from the form field
            $project_name = htmlspecialchars($_POST['project_name']);
            //attached pdf file information
            $file_name = $_FILES['pdf_file']['name'];
            $file_tmp = $_FILES['pdf_file']['tmp_name'];
            if ($pdf_blob = fopen($file_tmp, "rb")) {
                try {
                    include __DIR__."/includes/DatabaseConnection.php";
                    $insert_sql = "INSERT INTO `project_pdf` (`project_name`, `pdf_doc`)
                                  VALUES(:project_name, :pdf_doc);";
                    $stmt = $pdo->prepare($insert_sql);
                    $stmt->bindParam(':project_name', $project_name);
                    $stmt->bindParam(':pdf_doc', $pdf_blob, PDO::PARAM_LOB);
                    if ($stmt->execute() === FALSE) {
                        echo 'Could not save information to the database';
                    } else {
                        echo 'Information saved';
                    }
                } catch (PDOException $e) {
                    echo 'Database Error '. $e->getMessage(). ' in '. $e->getFile().
                        ': '. $e->getLine();
                }
            } else {
                //fopen() was not successful in opening the .pdf file for reading.
                echo 'Could not open the attached pdf file';
            }
        }
    } else {
        //submit button was not clicked. No direct script navigation.
        header('Location: choose_file.php');
    }

Verify the button is clicked and a file is attached

The first if/else block verifies that the 'submit' button from the form has been clicked and that the 'pdf_file' field has an attachment, using the PHP functions isset() and empty() (negated with the not (!) operator in the case of empty()):

    isset($_POST['submit']) && !empty($_FILES['pdf_file']['name'])

Tip: More validation can be implemented here to verify the file type is an actual .pdf, since that file type is what we are expecting to store in the database.

Informational: Visit the official PHP online documentation for more information on isset() and empty().

Check the PHP $_FILES array for errors

The $_FILES array provides several error codes for file attachments. A value of 0 (zero) means everything is okay with the file and it was successfully uploaded. In this particular if/else block, if that value is not 0 (zero), we echo in the browser that something is wrong with the file upload:

    $_FILES['pdf_file']['error'] != 0

Related: See the official PHP online documentation for more information on file upload errors.

PHP $_POST and $_FILES Data

The $_POST associative array has the value for the 'project_name' input field captured in the form and sent through the HTTP POST method. Likewise, the $_FILES associative array has several values for a file or attachment. I am assigning 2 of them to variables, but using only one, ['tmp_name'], in the subsequent code:

- ['name'] – The actual file name from the client. (Could be used in a file name column if needed.)
- ['tmp_name'] – Temporary file name of the uploaded file as stored on the server.
    $project_name = htmlspecialchars($_POST['project_name']);
    $file_name = $_FILES['pdf_file']['name'];
    $file_tmp = $_FILES['pdf_file']['tmp_name'];

Related: Read more about POST upload methods in the official online PHP documentation.

Read in the .pdf binary data and prepare to store it in the MySQL BLOB column with PHP

The call to fopen() reads in the file in binary format ("rb") into a $pdf_blob variable. If fopen() cannot open the file, this if/else block echoes the message in the else block to the browser:

    $pdf_blob = fopen($file_tmp, "rb")

MySQL database connection and prepared statements

Finally, we look at the entire try/catch block. I have all database connection information stored in a separate file named DatabaseConnection.php and include it in the script at this point using the include directive. Since we are introducing user-supplied input from the web form into the database, we use prepared statements leveraging the PHP PDO methods:

- prepare()
- bindParam()
- execute()

INSERT the .pdf file into the MySQL BLOB column with PHP

If the call to execute() is not successful, we echo a message to the browser that the information could not be saved. Otherwise, the data is successfully inserted and we echo 'Information saved':

    try {
        include __DIR__."/includes/DatabaseConnection.php";
        $insert_sql = "INSERT INTO `project_pdf` (`project_name`, `pdf_doc`)
                      VALUES(:project_name, :pdf_doc);";
        $stmt = $pdo->prepare($insert_sql);
        $stmt->bindParam(':project_name', $project_name);
        $stmt->bindParam(':pdf_doc', $pdf_blob, PDO::PARAM_LOB);
        if ($stmt->execute() === FALSE) {
            echo 'Could not save information to the database';
        } else {
            echo 'Information saved';
        }
    } catch (PDOException $e) {
        echo 'Database Error '. $e->getMessage(). ' in '. $e->getFile().
            ': '. $e->getLine();
    }

Note: It is generally not good practice to echo any sort of database error information to the browser. Errors should be written to a log file instead. However, since this blog post is a learning experience, I echo out any exceptions that may arise in the catch block.

Using the form to store a .pdf in the MySQL BLOB column with PHP

Let's try out the form and PHP code to verify the upload works. Here is a simple sample .pdf file I created for a demo run:

Contents of SamplePDF.pdf document to upload with form.

In this screenshot I fill in the 'project name' field with a generic 'First Project' name and attach the SamplePDF.pdf file:

Filled out web form with pdf attachment for upload.

Upon clicking the 'Submit' button, the information is successfully stored in the database and the success message is displayed in the browser:

Message displayed in the browser after successful pdf upload.

Here is the data saved in the MySQL database table from the successful upload via our web form and the PHP code:

The project_pdf table with inserted pdf and project name.

It works!!!
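If you'd rather confirm the row from the MySQL side without pulling the whole document back, a quick check like this works (my addition, not part of the original post):

    -- Confirm the upload landed; OCTET_LENGTH() reports the stored BLOB size in bytes:
    SELECT id, project_name, OCTET_LENGTH(pdf_doc) AS pdf_bytes
    FROM project_pdf;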
In the next blog post, I will cover how to retrieve the .pdf file from the database and display it in the browser. If you see anything in the code that I can improve on, or any mistake, please let me know via the comments section below.

Additional PHP/MySQL Reading

Be sure to visit these other blog posts I have written on PHP and MySQL:

- PHP PDO lastInsertId() method with examples in MySQL
- Beginning Perspective on PHP Arrays
- Sorting associative arrays in PHP with array_multisort() – New learning
- Dynamic HTML drop-down with PHP and MySQL

Like what you have read? See anything incorrect? Please comment below and thanks for reading!!!

A Call To Action!

Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share your findings here with someone else you know who would get the same value out of it as well. Visit the Portfolio-Projects page to see blog posts/technical writing I have completed for clients. To receive email notifications (never spam) from this blog ("Digital Owl's Prose") for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the 'Click To Subscribe!' button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl's Prose Privacy Policy Page for any questions you may have about: email updates, opt-in, opt-out, contact forms, etc.)

Be sure to visit the "Best Of" page for a collection of my best blog posts.

Josh Otwell has a passion to study and grow as a SQL Developer and blogger. Other favorite activities find him with his nose buried in a good book, article, or the Linux command line. Among those, he shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.

Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solution(s). The majority, if not all, of the examples provided are performed on a personal development/learning workstation environment and should not be considered production quality or ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.

The post Use MySQL BLOB column with PHP to store .pdf file appeared first on Digital Owl's Prose.
  3. Today marks a new era for Galera Cluster monitoring and management, as we release Galera Manager 1.0-beta into the wild for everyone to evaluate, test, and deploy their Galera Clusters within an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) environment, achieving MySQL high availability, multi-master MySQL on the cloud, and disaster recovery, all from the comfort of a web-based graphical user interface (GUI).

What does Galera Manager do?

Galera Manager is a deployment, management, and monitoring solution for Galera Clusters. A user can easily create clusters, add and remove nodes, and create geo-distributed clusters across multiple AWS regions, all with the click of a few buttons in one's web browser. Even more useful are the over 620 monitoring metrics available to track the health of your clusters. Being fully web-based, you can say goodbye to having to access a console, configure each my.cnf individually, and bootstrap a cluster from scratch. Galera Manager brings Galera Cluster to the rest of us, beyond just DBAs!

Galera Manager is the tool to deploy Galera Clusters within an AWS EC2 environment. You can start with t2.micro instances for all your needs during evaluation, but naturally you are expected to launch more powerful instances for production use cases. Currently, Galera Manager allows you to create a new Galera Cluster with either MySQL 5.7 or MySQL 8.0, with CentOS 7, Ubuntu 18.04 LTS, or Ubuntu 20.04 LTS as the base operating system. It is well documented: the Galera Manager documentation should be considered the prime place to get information about Galera Manager.

Let's get started with an install! When it comes to budgeting, you will need four (4) t2.micro instances, which are also AWS free-tier eligible. Create an EC2 node with SSH access in any region that you prefer. For simplicity, Amazon Linux 2 was chosen as the base operating system (as it remains largely compatible with CentOS). For ease of use, run this as root (sudo su or the like will suffice). Once SSH'ed into your instance, you will need to download the installer:

    wget https://galeracluster.com/galera-manager/gm-installer

After that, make it executable:

    chmod +x gm-installer

Then start the installer:

    ./gm-installer install

You will have to accept the End User License Agreement (EULA). Most things can be left at their defaults, and you can follow along with the install documentation, but it is worth noting that when it comes to domains and certificates, if you choose to enter an IP address (quite the default when you're testing on AWS), you will get a warning stating: "Since you entered an IP address, SSL won't be available." There are implications to this, i.e. everything goes over the wire in plaintext, including your AWS credentials, over HTTP as opposed to HTTPS. However, you will get set up for evaluation a lot faster.

Once all that is done, wait for the installer to complete. This takes a few minutes. You will see a message like the following (your IP address or hostname will vary, of course):

    INFO[0143] Galera Manager installation finished. Enter http://34.228.140.255 in a web browser to access. Please note, you chose to use an unencrypted http protocol, such connections are prone to several types of security issues. Always use only trusted networks when connecting to the service.

You will also note that there are Logs and Metrics database URLs, which you can safely ignore.
You do, however, need to open up some ports in the firewall on the Amazon side (you do this via Security Groups). The ports in question are 80, 443, 8091, and 8092, and we have documented this at AWS Ports with Galera Manager. The change takes effect immediately, and it is in addition to port 22 (SSH), which is already open.

Now you can access your Galera Manager via any web browser. Log in with the credentials you provided earlier, and you can get started creating your first 3-node cluster.

Start by creating a cluster. While you can provide a "Custom DB engine configuration", I would advise against this if you are trying to get started as quickly as possible. However, if you choose to do so, please refer to the guide in Deploying a Cluster in Galera Manager. Pick your AWS region for the initial cluster setup, configure which DB engine you want and which host OS it should be running, and get your AWS Access Key ID and AWS Secret Access Key from your AWS console. For more information, read: Understanding and getting your AWS credentials (we also cover this in the documentation linked above). It is crucial to provide an SSH authorised key to ensure that you are able to log in to the servers that your Galera Clusters are being deployed on.

Once your cluster is created, you need to create some nodes within it. Galera Cluster runs best with a minimum of 3 nodes, and it is recommended that you deploy as such. You can opt to automatically start nodes after deployment. You can also choose to segment the cluster here, but generally the defaults are all you need to get going. Here you can also choose your instance type (note that this is separate from where you installed Galera Manager; the software itself runs fine on a t2.micro, but your needs may vary when it comes to a production cluster, as AWS EC2 has multiple instance types that can be suited to your workload). Your region can also be different from where you installed Galera Manager (you might notice that it defaults to Frankfurt). As for the host type, you'll notice that ec2 is the default and locallxd is available as well; for the purposes of this walkthrough, you should just leave it at ec2 (though this hints at where we are headed from a roadmap standpoint).

Once you click Deploy, you'll notice that the install starts. You can switch tabs while the deployment happens. This can take anywhere between 5 and 20 minutes. If you are wondering why the timing is so non-deterministic, it has a lot to do with all the packages being installed and the latency between your instances and the repositories. The installer also wants to ensure that there are no failures, so on a slower instance (sometimes you have a noisy neighbour) there are more "sleeps" built into the software to ensure proper execution. The good news is that you will see individual ETAs for your instance deploys. You can also hop over to the individual logs to see how the instance installs are progressing.

Once the deploy is completed, you will be presented with your new managed cluster. You will see tabs for monitoring, logs, and configuration (which lets you find hostname information so you can use the mysql client to connect, or SSH into the server), as well as jobs that are running. Congratulations, you now have a 3-node Galera Cluster deployed entirely using the GUI tool, Galera Manager. In future blog posts we will cover adding nodes, removing nodes, dropping into your cluster via SSH, and more.
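As a quick sanity check (my addition, not part of the original walkthrough): once you connect to any node with the mysql client, the standard Galera wsrep status variables will confirm the cluster has formed and is healthy:

    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';    -- expect 3 on a healthy 3-node cluster
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';  -- expect 'Primary'
    SHOW GLOBAL STATUS LIKE 'wsrep_ready';           -- expect 'ON'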
We are looking for feedback on this beta, so please do email info@codership.com. We will also be releasing regular updates. Being a beta, there are naturally several known bugs, including debug logging being turned on by default; toggles to improve this will come in the next release. And being web-based software, we plan to release and deploy much more frequently.
  4. I've been experimenting with mysqlsh since installing it last week. It's been interesting. Overall, I'm totally impressed, but I did find a problem with how it parses stored procedures. The first thought is always: is it my code? I checked the file by running it as a script file through MySQL Workbench. It ran perfectly in MySQL Workbench but failed repeatedly when run from the mysqlsh utility. The next step: reduce the code to a small test case, retest it, and log a bug if the failure is replicated.

My test case in a test.sql file generates the following errors when run from the mysqlsh utility:

    MySQL localhost:33060+ ssl studentdb SQL > source test.sql
    Query OK, 0 rows affected (0.0003 sec)
    ERROR: 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CREATE PROCEDURE test ( pv_input1 CHAR(12) , pv_input2 CHAR(19)) MODIFIES SQL ' at line 2
    Query OK, 0 rows affected (0.0002 sec)
    Query OK, 0 rows affected (0.0003 sec)
    ERROR: 1305: PROCEDURE studentdb.test does not exist

The same file generates this series of successful messages when run as a script file from MySQL Workbench:

    Preparing...
    Importing test.sql...
    Finished executing script
    Statement
    CREATE PROCEDURE test
    pv_input1
    One
    Operation completed successfully

For those who are curious enough to review the test case, here it is:

    -- Reset the delimiter so that a semicolon can be used as a statement and block terminator.
    DELIMITER $$

    SELECT 'CREATE PROCEDURE test' AS "Statement";

    CREATE PROCEDURE test
    ( pv_input1  CHAR(12)
    , pv_input2  CHAR(19))
    MODIFIES SQL DATA
    BEGIN
      SELECT CONCAT(pv_input1,', ',pv_input2) AS message;
    END;
    $$

    -- Reset the standard delimiter to let the semicolon work as an execution command.
    DELIMITER ;

    -- Call the test procedure.
    CALL test('One','Two');

The reply in the bug report explained the behavior difference between the MySQL Workbench and MySQL Shell (mysqlsh) environments. MySQL Workbench uses the MySQL client, which supports multiple client statements with the CLIENT_MULTI_STATEMENTS option. Recognizing that, the logging SELECT statement should move to a position before the DELIMITER is set, like this:

    -- Set a label for the log file.
    SELECT 'CREATE PROCEDURE test' AS "Statement";

    -- Reset the delimiter so that a semicolon can be used as a statement and block terminator.
    DELIMITER $$

    CREATE PROCEDURE test
    ( pv_input1  CHAR(12)
    , pv_input2  CHAR(19))
    MODIFIES SQL DATA
    BEGIN
      SELECT CONCAT(pv_input1,', ',pv_input2) AS message;
    END;
    $$

    -- Reset the standard delimiter to let the semicolon work as an execution command.
    DELIMITER ;

    -- Call the test procedure.
    CALL test('One','Two');

The new test case submits only one statement at a time. The logging query is terminated by the semicolon, and the test procedure by the double dollar ($$) delimiter. So, I was correct in identifying a parsing behavior difference between MySQL Workbench and MySQL Shell. It appears to be a difference by design, but the MySQL Shell documentation fails to explain that it can't manage multiple statements. I hope identifying this saves others time.

It's also true that the MySQL client software supports TEE and NOTEE to write log files. Unfortunately, MySQL Shell (mysqlsh) doesn't support the TEE and NOTEE syntax. You can only do minimal logging with control of standard error (stderr) by using the application and AdminAPI log utilities, which are covered in Chapter 8 of the MySQL Shell 8.0 documentation.
  5. It's always interesting when I upgrade from one release to the next. I learn new things, and in the case of MySQL's installation and maintenance, I become more grateful for the great team of developers working to produce MySQL 8. A warning that caught my eye in MySQL 8 (8.0.21) was this one on Unicode and the utf8 character set:

    Warning (code 3719): 'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.

Currently, utf8 is an alias for the deprecated utf8mb3 (a 3-byte character set) until that character set is removed. When utf8mb3 is removed in a subsequent release, the utf8 alias will represent utf8mb4 (a 4-byte character set), which includes BMP and supplementary character support.

It appears to me that it's probably a good time to proactively provision disk space for an upgrade to utf8mb4 (a 4-byte character set). It's also probably time to use the utf8mb4 character set rather than the utf8 alias. The MySQL Server team wrote a blog entry on when to use which one. Naturally, all this will have a substantial impact on disk space allocated to the database.
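For anyone planning that move, the conversion itself is a one-liner per table. A minimal sketch (my addition; the schema and table names are hypothetical, and utf8mb4_0900_ai_ci is simply MySQL 8.0's default utf8mb4 collation):

    -- Find tables still on a utf8(mb3) collation (schema name is a placeholder):
    SELECT table_name, table_collation
    FROM information_schema.tables
    WHERE table_schema = 'mydb' AND table_collation LIKE 'utf8\_%';

    -- Convert a table's default character set and all of its string columns:
    ALTER TABLE mydb.my_table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;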