Tags Archives: database

AWS DB Parameters

Database or DB parameters specify how your database is configured. For example, database parameters can specify the amount of resources, such as memory, to allocate to a database.

 

You manage your database configuration by associating your DB instances and Multi-AZ DB clusters with parameter groups. Amazon RDS defines parameter groups with default settings.

 

A DB Parameter Group is a collection of engine configuration values that you set for your RDS database instance.

 

It contains the definition of what you want each of these over 400 parameters to be set to.

 

By default, RDS uses a default parameter group specified by AWS. It is not actually necessary to use a different parameter group.

 

Each default parameter group is unique to the database engine you select, the EC2 compute class, and the storage allocated to the instance.

 

You cannot change a default parameter group, so if you want to make modifications then you will have to create your own parameter group.

 

RDS database engine configuration is managed through the use of parameters in a DB parameter group.

 

DB parameter groups serve as an effective container for holding engine configuration values that are applied to your DB instances.

 

A default DB parameter group is created if you make a database instance without specifying a custom DB parameter group. This default group contains database engine defaults and Amazon RDS system defaults based on the engine, compute class, and allocated storage of the instance.

 

 

When you create a new RDS database, ensure you have a custom DB parameter group to use with it from the start. Otherwise you may have to perform an RDS instance restart later on just to swap out the default DB parameter group, even if the parameter you want to alter is dynamic and modifiable.

 

This approach gives you the flexibility to change your configuration further down the road.

 

Creating your own parameter group can be done via the console or the CLI. Self-created parameter groups take as their basis the default parameter group for that particular instance and selected DB engine.

 

After creation, you can modify the parameters via the console or CLI as your needs change.
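
As a sketch of how this looks with the AWS CLI (the group name, parameter group family and instance identifier below are hypothetical placeholders):

# Create a custom parameter group based on the defaults for the MySQL 8.0 family
aws rds create-db-parameter-group \
    --db-parameter-group-name my-custom-mysql8 \
    --db-parameter-group-family mysql8.0 \
    --description "Custom parameters for my MySQL 8.0 instances"

# Attach the custom group to an existing DB instance
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-parameter-group-name my-custom-mysql8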

 

Parameters can either be static or dynamic.

 

Static means that changes won’t take effect without an instance restart.

 

Dynamic means a parameter change can take effect without an instance restart.

 

Dynamic parameters are either session scope or global scope.

 

Global scope dynamic parameters mean that changes will impact the entire server and all sessions.

 

Session scope dynamic parameters however are only effective for the session where they were set.

 

Note however that some parameter variables can have both global and session scope.

 

In these cases, the global value is used as the default for the session scope and any global change to a parameter that also has a session scope will only affect new sessions.
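To illustrate the static/dynamic distinction with the AWS CLI (parameter names, values and the group name here are only examples): a dynamic parameter can be applied immediately, whereas a static parameter only accepts the pending-reboot apply method and takes effect at the next restart.

# Dynamic parameter: takes effect without an instance restart
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-mysql8 \
    --parameters "ParameterName=max_connections,ParameterValue=500,ApplyMethod=immediate"

# Static parameter: only pending-reboot is accepted; applied at the next restart
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-mysql8 \
    --parameters "ParameterName=lower_case_table_names,ParameterValue=1,ApplyMethod=pending-reboot"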

 

Another important aspect to bear in mind when creating a DB parameter group:

 

 

You should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group as the default parameter group. This allows Amazon RDS to fully complete the create action before the parameter group is used as the default for a new DB instance.

 

 

For the exam!

 

Above all, know this DB parameter for the exam:

 

for PostgreSQL and SQL Server:

 

rds.force_ssl=1

 

this forces SSL connections to be used
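
As a hedged sketch, setting this on a custom PostgreSQL parameter group via the CLI (the group name is an assumption; rds.force_ssl is dynamic on PostgreSQL so it can apply immediately, whereas on SQL Server a reboot may be required):

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-postgres \
    --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=immediate"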

 

 

 

BUT – for MySQL/MariaDB you must instead use a GRANT statement with REQUIRE SSL:

 

GRANT SELECT ON mydatabase.* TO 'myuser'@'%' IDENTIFIED BY '…' REQUIRE SSL;

 

 

 


AWS – Choosing an AWS Database

AWS offers several database solutions.

 

A number of questions need to be considered when choosing an AWS DB solution:

 

Questions to Consider

 

What is your need – read heavy, write heavy, balanced?

 

Throughput volume?

 

Will this change? Does it need to scale? Will it fluctuate up and down? Is it predictable or not?

 

How much data volume to store – and for how long

 

will it grow?

 

what is the average object size?

 

how do you need to access it?

 

what data durability do you require? what is the “source of truth” for the data?

 

are there compliance requirements?

 

what latency requirements – how many concurrent users

 

what is the data model – how will you query the data, will there be data joins? structured, semi-structured?

 

strong schema, flexible, reporting needs? searching? RDBMS or NoSQL?

 

what about license costs – cloud native db such as Aurora possible or not?

 

Overview of Database Types on AWS

 

RDBMS, ie SQL/OLTP: this means RDS or Aurora – especially good for joins

NoSQL DBs such as DynamoDB (JSON), ElastiCache (key/value pairs) or Neptune (good for graphs) – but no joins and no SQL

 

Object Stores: S3 for big objects, Glacier for backup and archives – may not seem like a DB but it works like one

 

Data Warehouse solutions eg SQL Analytics/BI, Redshift (OLAP), Athena

 

Search solutions: eg ElasticSearch (json), for free text unstructured searches

 

Graph solutions: Neptune – this displays relationships between data

 

 

Overviews of AWS DB Solutions

 

RDS Overview

 

it's a managed DB for PostgreSQL / MySQL / Oracle / SQL Server

 

you must however provision an EC2 instance type and an EBS volume type of sufficient size

 

it supports read replicas and multi-AZ
security is via iam and security groups, kms, and ssl in transit
backup, snapshot and point in time restores all possible

 

managed and scheduled maintenance

 

monitoring available via cloudwatch

 

use cases include:

 

storing relational datasets (RDBMS/OLTP), performing SQL queries, and transactional inserts, updates and deletes

 

for a solutions architect, RDS considerations include these "5 pillars":

 

operations
security
reliability
performance
cost

 

 

operations: small downtimes when failover happens, when maintenance happens, when scaling read replicas or EC2 instances, when restoring from EBS (this requires manual intervention), and when the application changes

 

 

security: aws is responsible for os security, but we are responsible for setting up kms, security groups, iam policies, authorizing users in db and using ssl

 

 

reliability: the multi-az feature makes rds very reliable, good for failover in failure situations

 

performance: dependent on ec2 instance type, ebs vol type, can add read replicas, storage autoscaling is possible, and manual scaling of instances is also possible

 

costs: is pay per hour based on provisioned number and type of ec2 instances and ebs usage

 

 

Aurora Overview

 

Compatible API for PostgreSQL and MySQL

 

Data is held in 6 replicas across 3 AZs
It has auto-healing capability
Multi-AZ Auto Scaling Read Replicas
Read Replicas can be global

 

Aurora DB can be global for DR or latency purposes
auto Scaling of storage from 10GB to 128 TB

 

Define EC2 instance type for Aurora instances

 

same security/monitoring/maintenance as for RDS but also has

 

Aurora Serverless for unpredictable/intermittent workloads
Aurora Multi-Master for continuous writes failover

 

use cases: same as for RDS but with less maintenance/more flexibility and higher performance

 

Operations: less operations, auto scaling storage

 

security: AWS as per usual, but we are responsible for kms, security groups, iam policy, user auth and ssl

 

reliability: multi AZ, high availability, serverless and multimaster options

 

performance: 5x the performance of standard MySQL, plus a max of 15 read replicas (vs only 5 for RDS)

 

cost: pay per hour acc to ec2 and storage usage, can be lower cost than eg oracle

 

 

ElastiCache Overview

 

it is really just a cache not a database

 

is a managed Redis or Memcached – similar to RDS but for caches

 

its an in-memory data store with very low latency
you must provision an ec2 instance type to use it

 

supports redis clustering and multi-az, plus read replicas using sharding

 

security is via security groups, kms, ssl and redis auth – NO IAM authentication at the database level (IAM controls only the AWS API)

 

backup, snapshot, point in time restores
managed and scheduled maintenance
monitoring via cloudwatch

 

use case: key/value store, frequent reads, less writes, cache results for db queries, storing of session data for websites; but you cannot use SQL – the latter is important to be aware of.

 

you retrieve the data only by key value, you can’t query on other fields
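
A quick sketch of what key-only access looks like with redis-cli against an ElastiCache Redis endpoint (the endpoint and key names are hypothetical):

# Store and fetch a session purely by its key – there is no WHERE-clause equivalent
redis-cli -h my-cluster.abc123.ng.0001.use1.cache.amazonaws.com -p 6379 \
    SET "session:user42" '{"cart":[1,2,3]}' EX 3600   # EX sets a TTL in seconds
redis-cli -h my-cluster.abc123.ng.0001.use1.cache.amazonaws.com -p 6379 \
    GET "session:user42"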

 

 

operations: same as rds
security: as usual, IAM policies at the AWS API level; for database users, Redis AUTH and SSL

 

reliability: clustering and multi AZ

 

performance: in memory so extremely fast, read replicas for sharding, very efficient

 

cost: similar to rds pricing based on ec2 and storage usage

 

 

DynamoDB

 

proprietary to AWS
a managed NoSQL DB
serverless, provisioned, auto-scaling, on-demand capacity

 

can replace elasticache as a key-value store, eg for storing session data

 

performance is slower than eg rds

 

highly available, multi-AZ by default, read–writes decoupled, DAX available as a read cache

 

2 options for reads: eventually consistent or strongly consistent
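
With the AWS CLI the read model is chosen per request via the --consistent-read flag (the table and key below are made up); eventually consistent reads are the default and cost half as much:

# Default: eventually consistent read
aws dynamodb get-item \
    --table-name Sessions \
    --key '{"SessionId": {"S": "abc123"}}'

# Strongly consistent read: always reflects the latest committed write
aws dynamodb get-item \
    --table-name Sessions \
    --key '{"SessionId": {"S": "abc123"}}' \
    --consistent-read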

 

security, authorization-authentication all done via iam

 

dynamodb streams integrate with lambda

 

backup-restore and global table feature – if you enable streams

 

monitoring via cloudwatch

 

important
but you can only query on primary key, sort key or indexes – exam q!
so you cannot query on “any” attribute – only the above.
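
A sketch of the difference (table and attribute names are hypothetical): Query must name the partition key, while filtering on an arbitrary attribute falls back to a full, costly Scan.

# Fine: query on the partition key (or on a key of a secondary index)
aws dynamodb query \
    --table-name GameScores \
    --key-condition-expression "PlayerId = :pid" \
    --expression-attribute-values '{":pid": {"S": "player42"}}'

# Not possible as a query: an arbitrary attribute forces a full table scan
aws dynamodb scan \
    --table-name GameScores \
    --filter-expression "Nickname = :nick" \
    --expression-attribute-values '{":nick": {"S": "kev"}}'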

 

use case:

 

serverless app development, small documents (100s of KB), distributed serverless cache; it has no SQL query language available, but transaction capability is now built in

 

 

 

S3 Overview

 

acts as a simple key/value store for objects

 

great for big objects up to 5TB, not so good for small objects

 

serverless, scales infinitely, strong consistency for every operation

 

storage tiers for migrating data: S3 Standard, S3 IA, S3 One Zone-IA, Glacier for backups

 

features include: versioning, encryption, CRR – cross-region replication

 

security: IAM, bucket policies, ACL

 

encryption: SSE-S3, SSE-KMS, SSE-C, client side encryption, SSL in transit

 

use case: static files, key-value stores for big files and website hosting
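
In key-value terms the key is the full object path and the value is the object's bytes; a minimal sketch with the CLI (bucket name assumed):

# PUT: key = "site/backups/blog.bak.sql", value = the file contents
aws s3 cp blog.bak.sql s3://my-example-bucket/site/backups/blog.bak.sql

# GET the object back by its key
aws s3 cp s3://my-example-bucket/site/backups/blog.bak.sql ./restored.sql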

 

operations: no operations necessary!
security: IAM, bucket policies, ACL, encryption set up correctly,

 

reliability: extremely high, durability also extremely good, multi-AZ and CRR

 

performance: scales easily, very high read/write, multipart for uploads

 

cost: only pay for storage used, no need to provision in advance, plus network costs for transfer/retrieval, plus charge per number of requests

 

 

Athena Overview

 

fully serverless query service with SQL capability
used to query data in S3
pay per query
outputs results back to S3
is secured via IAM

 

use cases: one-time sql queries, serverless queries on S3, log analytics
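
A sketch of a serverless log query from the CLI (the database, table and output bucket are placeholders; Athena writes the results back to the S3 output location):

aws athena start-query-execution \
    --query-string "SELECT useridentity.username, eventname FROM cloudtrail_logs WHERE errorcode = 'AccessDenied' LIMIT 100" \
    --query-execution-context Database=logs_db \
    --result-configuration OutputLocation=s3://my-athena-results/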

 

 

Operations: no ops needed! is serverless

 

security: via S3 using bucket policies, IAM

 

reliability: is managed, uses Presto engine, highly available
performance: queries scale based on data size

 

cost: pay per query/per TB of data scanned, serverless

 

 

Redshift Overview

 

is a great data warehouse system

 

based on PostgreSQL but not used for oltp

 

instead, it's an OLAP (online analytical processing) database – analytics and data warehousing

 

10x better performance than other data warehouses, scales to PBs of data

 

columnar storage of data rather than row-based

 

it is MPP – it uses a massively parallel query execution engine, which makes it extremely fast

pay as you go according to the number of instances provisioned
has an SQL interface for performing the queries.

 

data is loaded from S3, DynamoDB, DMS and other DBs,
1 to 128 nodes possible with up to 128 TB of space per node!

 

leader node used for query planning and results aggregation
compute node : performs the queries and sends results to leader node

 

Redshift Spectrum: performs queries directly against S3 with no need to load the data into the Redshift cluster

 

Backup & restore; security via VPC, IAM, KMS; monitoring

 

Redshift Enhanced VPC Routing: Copy / Unload goes through VPC, this avoids public internet
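
Enhanced VPC routing is a cluster-level toggle; a sketch with the CLI (the cluster identifier is assumed):

aws redshift modify-cluster \
    --cluster-identifier my-redshift-cluster \
    --enhanced-vpc-routing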

 

Redshift Snapshots and DR

 

has no multi-AZ mode

 

the snapshots are point-in-time (PIT) backups of a cluster, stored internally in S3

 

they are incremental — only changes are saved, which makes them fast

you can restore a snapshot to a new cluster

automated snapshots: taken every 8 hours or every 5 GB of data changed, or on a schedule, with a set retention period

 

manual snapshots: snapshot is retained until you delete it

 

neat feature:
you can configure Redshift to automatically copy snapshots – both manual and automated ones – to another AWS region
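
A sketch of enabling automatic cross-region snapshot copy with the CLI (cluster identifier, region and retention period are placeholders):

aws redshift enable-snapshot-copy \
    --cluster-identifier my-redshift-cluster \
    --destination-region eu-west-1 \
    --retention-period 7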

 

 

loading data into redshift:

 

there are three possible ways:

 

1. Kinesis Data Firehose: writes the data to an S3 bucket, then loads it into the Redshift cluster automatically via an S3 COPY

 

2. using the COPY command manually without Kinesis (see the sketch after this list):
either from the S3 bucket via the internet – ie without using enhanced VPC routing to the Redshift cluster –
or through the VPC rather than the internet, using enhanced VPC routing

 

 

3. from an EC2 instance to the Redshift cluster using a JDBC driver
in this case it is much better to write the data in batches rather than all at once.
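
As a sketch of options 2 and 3 (bucket, IAM role, table names and cluster endpoint are all hypothetical): COPY pulls the data in from S3 in parallel, and when inserting over a driver connection a single multi-row INSERT is far cheaper than many single-row statements.

# Option 2: COPY from S3 into an existing table, run via psql against the cluster
psql -h my-cluster.abc123.eu-west-1.redshift.amazonaws.com -p 5439 -U awsuser -d dev \
    -c "COPY sales FROM 's3://my-bucket/sales/' IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftCopyRole' CSV;"

# Option 3: batched insert – one statement carrying many rows
psql -h my-cluster.abc123.eu-west-1.redshift.amazonaws.com -p 5439 -U awsuser -d dev \
    -c "INSERT INTO sales (id, amount) VALUES (1, 9.99), (2, 19.99), (3, 4.99);"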

 

 

Redshift Spectrum

 

must already have a redshift cluster operational

 

Spectrum is a way to query data that is already in S3 without having to load it into Redshift

 

you submit the query, which is then distributed to thousands of Redshift Spectrum nodes for processing

 

 

Operations: similar to rds
security: uses iam, vpc, kms, ssl as for rds
reliability: auto healing, cross region snapshot copies possible
performance: 10x performance of other data warehousing systems, uses compression
cost: pay per node provisioned about 10% of cost of other dw systems
vs Athena: Redshift is faster at queries, joins and aggregations thanks to indexes

 

redshift = analytics/BI/Data Warehouse

 

 

Glue Overview

 

is a managed extract, transform and load (ETL) service

 

used to prepare and transform data for analytics
fully serverless service

 

fetch data from s3 bucket or rds, send to glue which does extracting, transforming and loading to redshift data warehouse

 

glue data catalog: catalog of all the datasets you have in aws – ie metadata info about your data

 

you deploy the Glue Data Crawler to navigate S3, RDS or DynamoDB; it then writes the metadata to the Glue Data Catalog.
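
A hedged sketch of creating and starting a crawler over an S3 path with the CLI (the crawler name, IAM role, catalog database and path are assumptions):

aws glue create-crawler \
    --name my-s3-crawler \
    --role AWSGlueServiceRole-demo \
    --database-name my_catalog_db \
    --targets '{"S3Targets": [{"Path": "s3://my-data-bucket/raw/"}]}'

aws glue start-crawler --name my-s3-crawler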

 

this can then be used by Glue Jobs for ETL

 

data discovery -> Athena, Redshift Spectrum, EMR for analytics

 

all you need to know about Glue for exam

 

Finally,

 

Neptune Overview

 

tip:
if you hear graphs mentioned in the exam, this refers to Neptune!

 

is a fully managed graph database

 

used for
highly connected data
social networking, eg users are friends with other users, reply to comments on other users' posts, and like other comments

 

knowledge graphs eg for wikipedia – links to other wiki pages, lots of these links
this is graph data – giving a massive graph

 

highly available across 3 AZs with up to 15 read replicas available

 

point in time recovery, continuous backup to S3

 

support for kms encryption, https, and ssl

 

operations: similar to rds
security: iam, vpc, kms, ssl – similar to rds, plus iam authentication
reliability: multi-AZ and clustering
performance: best suited for graphs, clustering to improve performance

 

cost: similar to rds, pay per provisioned node

 

remember neptune = graphs database

 

 

AWS OpenSearch

 

– the successor to AWS ElasticSearch

 

eg dynamodb only allows search by primary key or index,

 

whereas with OpenSearch you can search ANY field

 

often used as a complement to other db

 

also has usage for big data apps
can provision a cluster of instances
built-in integrations: various – kinesis data firehose, aws IoT, CloudWatch Logs

 

 

comes with visualization dashboard

 

operations: similar to rds

 

security: is via Cognito, IAM, KMS encryption, SSL and VPC
reliability: multi-AZ and clustering
performance: based on the open-source ElasticSearch project, petabyte scale
cost: pay per provisioned node

 

all you need to remember:

 

used to search and index data

 

Question 1:
Which database helps you store relational datasets, with SQL language compatibility and the capability of processing transactions such as insert, update, and delete?

 

RDS

 

Question 2:
Which AWS service provides you with caching capability that is compatible with Redis API?

 

ElastiCache

Good job!
Amazon ElastiCache is a fully managed in-memory data store, compatible with Redis or Memcached.

 

 

Question 3:
You want to migrate an on-premises MongoDB NoSQL database to AWS. You don’t want to manage any database servers, so you want to use a managed NoSQL database, preferably Serverless, that provides you with high availability, durability, and reliability. Which database should you choose?

 

Amazon DynamoDB

 

Good job!

Amazon DynamoDB is a key-value, document, NoSQL database.

 

 

Question 4:
You are looking to perform Online Transaction Processing (OLTP). You would like to use a database that has built-in auto-scaling capabilities and provides you with the maximum number of replicas for its underlying storage. What AWS service do you recommend?

 

Amazon Aurora

 

Good job!

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database. It features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across 3 AZs.

 

 

 

Question 5:
As a Solutions Architect, a startup company asked you for help as they are working on an architecture for a social media website where users can be friends with each other, and like each other’s posts. The company plan on performing some complicated queries such as “What are the number of likes on the posts that have been posted by the friends of Mike?”. Which database do you recommend?

 

Amazon Neptune

 

Good job!
Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets.

 

 

Question 6:
You have a set of files, 100MB each, that you want to store in a reliable and durable key-value store. Which AWS service do you recommend?

 

Amazon S3

 

Good job!
Amazon S3 is indeed a key-value store! (where the key is the full path of the object in the bucket)

 

Question 7:
You would like to have a database that is efficient at performing analytical queries on large sets of columnar data. You would like to connect to this Data Warehouse using a reporting and dashboard tool such as Amazon QuickSight. Which AWS technology do you recommend?

 

Amazon Redshift

 

Good job!
Amazon Redshift

 

 

Question 8:
You have a lot of log files stored in an S3 bucket that you want to perform a quick analysis, if possible Serverless, to filter the logs and find users that attempted to make an unauthorized action. Which AWS service allows you to do so?

 

Amazon Athena

 

Good job!
Amazon Athena is an interactive serverless query service that makes it easy to analyze data in S3 buckets using Standard SQL.

 

 

Question 9:
As a Solutions Architect, you have been instructed to prepare a disaster recovery plan for a Redshift cluster. What should you do?

 

Enable automated snapshots, then configure your Redshift cluster to auto-copy the snapshots to another AWS region.

 

Good job!

 

 

Question 10:
Which feature in Redshift forces all COPY and UNLOAD traffic moving between your cluster and data repositories through your VPCs?

 

 

Enhanced VPC Routing

 

Good job!

 

Question 11:
You are running a gaming website that is using DynamoDB as its data store. Users have been asking for a search feature to find other gamers by name, with partial matches if possible. Which AWS technology do you recommend to implement this feature?

 

ElasticSearch

 

Good job!
Anytime you see “search”, think ElasticSearch.

 

 

Question 12:
An AWS service allows you to create, run, and monitor ETL (extract, transform, and load) jobs in a few clicks.

 

AWS Glue

 

Good job!
AWS Glue is a serverless data-preparation service for extract, transform, and load (ETL) operations.

 

 


MySQL-WordPress Problem: “Error Connecting to Database”

One day, without warning and for no clear reason, my website suddenly started displaying “error connecting to database” messages in the web browser for http://kevwells.com.

 

 

No content was displayed. The WordPress WP-Admin could not be accessed either. Big Problem!

 

Non-WordPress HTML pages were displaying and functioning OK, so it was obviously a MySQL–WordPress communication issue.

 

Pure PHP pages also displayed without problems – eg phpMyAdmin, Nextcloud, Checkmk. So PHP was running OK.

 

 

Possible Causes of the Problem

 

 

Most support suggestions list 3 main possibilities:

 

Your database login credentials are wrong or have been changed.
Your database server is unresponsive.
Your database has been corrupted.

 

I checked, checked, and rechecked all this in line with all the suggestions – and found everything to be in order. No errors, no mistakes, nothing missing.

 

Searching online for possible alternative explanations for this problem yielded little of use. Most suggestions simply regurgitated the standard textbook stuff – check login credentials, check database is running, check apache is running. Check the database isn’t corrupted.

 

I encountered other people who had also faced this same sudden problem – so I was not alone with this issue – but they were as stuck as I was.

 

 

However, there was a fourth possibility which no-one appeared to have mentioned:

 

WordPress plugins causing the problem.

 

 

After extensive investigation I ascertained that this was indeed the cause in this case.

 

The solution was to move plugins to a new directory: plugins.disabled.

 

Then move the plugins back to the original plugin directory one by one: for each plugin, log in to wp-admin, activate it, and test the website content functionality, repeating the procedure for each plugin in turn.

 

 

 

Below are the steps I used in analysing this problem:

 

Check MySQL is running and listening on the correct port

 

First check the mysqld or mysql service is up and running, and that apache is running:

 

ps -ef | grep apache

 

ps -ef | grep mysql

 

also check they are listening for connections, and listening on the correct ports:

 

netstat -tulpn

 

Check version of mysql with:

 

root@gemini:/var/www/kevwells.com# mysql --version
mysql Ver 15.1 Distrib 10.3.32-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
root@gemini:/var/www/kevwells.com#

 

 

Check your WordPress wp-config.php is correct and does not contain any errors

 

Next check the wp-config.php definitions for the mysql database. These are essential definitions that must be present – substitute your own definitions for your system and make sure they are correct:

 

Check your wp-config.php file in your WordPress document root directory.

 

Your wp-config.php should contain following:

 

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define( 'DB_NAME', 'database_name_here' );
/** MySQL database username */
define( 'DB_USER', 'username_here' );
/** MySQL database password */
define( 'DB_PASSWORD', 'password_here' );
/** MySQL hostname */
define( 'DB_HOST', 'localhost' );

 

 

How to Debug wp-config.php

 

The following code, inserted in your wp-config.php file, will log all errors, notices, and warnings to a file called debug.log in the wp-content directory.  It will also hide the errors so they do not interrupt page generation.

 

This code will need to be inserted before the comment /* That's all, stop editing! Happy blogging. */ in the wp-config.php file.

 

 

// Enable WP_DEBUG mode
define( 'WP_DEBUG', true );

// Enable Debug logging to the /wp-content/debug.log file
define( 'WP_DEBUG_LOG', true );

// Disable display of errors and warnings
define( 'WP_DEBUG_DISPLAY', false );
@ini_set( 'display_errors', 0 );

// Use dev versions of core JS and CSS files (only needed if you are modifying these core files)
define( 'SCRIPT_DEBUG', true );

 

WP_DEBUG
WP_DEBUG is a PHP constant (a permanent global variable) that can be used to trigger the “debug” mode throughout WordPress. It is assumed to be false by default and is usually set to true in the wp-config.php file on development copies of WordPress.

 

// This enables debugging.
define( 'WP_DEBUG', true );
// This disables debugging.
define( 'WP_DEBUG', false );

 

Obviously you do not want both true and false defined at the same time, so comment out the one you do not currently want to be active.

 

Note: The true and false values in the example are not surrounded by apostrophes (‘) because they are boolean (true/false) values. If you set constants to ‘false’, they will be interpreted as true because the quotes make it a string rather than a boolean.

 

It is not recommended to use WP_DEBUG or the other debug tools on live site; they are meant for local testing, debugging, and install stage only.

 

 

WP_DEBUG_LOG

 

WP_DEBUG_LOG is a companion to WP_DEBUG that causes all errors to also be saved to a debug.log log file. This is useful if you want to review all notices later or need to view notices generated off-screen (e.g. during an AJAX request or wp-cron run).

 

Note that this allows you to write to log file using PHP’s built in error_log() function, which can be useful for instance when debugging Ajax events.

 

When set to true, the log is saved to debug.log in the content directory (usually wp-content/debug.log) within your site’s filesystem. Alternatively, you can set it to a valid file path to have the file saved elsewhere.

 

define( 'WP_DEBUG_LOG', true );
-or-
define( 'WP_DEBUG_LOG', '/tmp/wp-errors.log' );
Note: for WP_DEBUG_LOG to do anything, WP_DEBUG must be enabled (true). Remember you can turn off WP_DEBUG_DISPLAY independently.

 

 

 

Does the problem occur for /wp-admin/ as well?

 

Does it occur with webpages on the server which are not provided by the wordpress database? eg plain html, php-driven services, and the like, if there are any on the server?

 

eg checkmk, or nextcloud, or simple html pages.

 

 

Try Repairing the WordPress Database

 

 

If the error is only on the front end, try to repair the database by adding the following line of code in wp-config.php:

 

define('WP_ALLOW_REPAIR', true);

 

Then go to http://www.yoursite.com/wp-admin/maint/repair.php and click the 'repair and optimize database' button.

 

 

/usr/bin/mysqlcheck --all-databases --auto-repair

 

 

check:

 

mysqlshow -u database_username -p

 

 

root@gemini:~#
root@gemini:~# mysqlshow --verbose
+--------------------+--------+
| Databases          | Tables |
+--------------------+--------+

(DATA REMOVED FOR SECURITY REASONS)

|                    |     12 |
+--------------------+--------+
9 rows in set.
root@gemini:~#

 

 

 

Check Your index.php is Present and Error-Free

 

Check the index.php is present and is the correct instance. This is important!

 

Whenever someone visits a web page on a WordPress site, the server attempts to run index.php or index.html from the defined WordPress document root directory.

 

Your website document root directory index.php contains the code that generates a WordPress site.

 

However there is also an index.php in the wp-content directory and in other locations. If you examine the index.php located under wp-content and other directories you will find that it contains no active code. It is simply a placeholder.

 

 

Without index.php, anyone could access your site’s /wp-content folder and see all of the media, files, and directories it contains.

 

The index.php file at this location and others functions like a privacy screen: it blocks visitors from directly accessing your directories.
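
For reference, the placeholder shipped by WordPress at wp-content/index.php contains nothing but a silencing comment, so checking it on the server is a quick way to verify it has not been tampered with (path as on this server):

cat /var/www/kevwells.com/wp-content/index.php
# Expected output in a stock WordPress install:
# <?php
# // Silence is golden.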

 

Alternatively, you can use the following directive in your .htaccess file to prevent directory listings even if that index.php file is not present:

 

Options -Indexes

 

However, be aware that you do NOT want this placeholder index.php in your main document root folder, else no one can access any website content!

 

 

Check Apache Can Connect To MySQL Server

 

Next, check your web-host can connect to your mysql server.

 

In PHP you can easily do this using the mysqli_connect() function. All communication between PHP and the MySQL database server takes place through this connection. Here are the basic syntaxes for connecting to MySQL using the MySQLi extension:

 

Syntax: MySQLi, Procedural way
$link = mysqli_connect("hostname", "username", "password", "database");

 

Syntax: MySQLi, Object Oriented way
$mysqli = new mysqli("hostname", "username", "password", "database");

 

Create a new file called testconnection.php and paste the following code in it:

 

<?php
// Attempt the connection; mysqli_connect_error() reports the failure reason
// (mysqli_error() needs a valid link, so it cannot be used for a failed connect)
$link = mysqli_connect('localhost', 'username', 'password');
if (!$link) {
    die('Could not connect: ' . mysqli_connect_error());
}
echo 'Connected successfully';
mysqli_close($link);
?>

 

(substitute localhost, username and password with the correct parameters for your site)

 

Then I called up the file in the web browser:

 

http://3.222.27.169/testconnection.php

 

Result, it displayed:

 

Connected successfully

 

 

So that means the connection parameters – dbhost name, username and password are ok.

 

 

Check the Error Logs for MySQL and Apache

 

 

If the configuration seems correct and the service is running, but your website still doesn’t load as it should, try checking the logs for any hints to as what might be the cause.

 

 

Error logs for Apache are located at /var/log/apache/error.log

 

Error logs for MySQL are located at /var/log/mysql/error.log

 

Should you not be able to find anything within the most recent logs, check the archived ones as well. To do this, use ‘zgrep’ with otherwise the same command as regular ‘grep’

 

sudo zgrep -i error /var/log/mysql/error.log.1.gz

 

Since the database under CentOS is named MariaDB instead of MySQL, the logs are also saved under a different name. You can search the logs with the following command.

 

sudo grep -i error /var/log/mariadb/mariadb.log

 

Debian systems also report MySQL events to /var/log/syslog. To filter out everything else, use ‘grep’ with two keywords separated by .* to express ‘and’, as in the command below.

 

sudo grep -i -E 'mysql.*error' /var/log/syslog

 

If you are having difficulties finding anything helpful, try different keywords such as ‘start’ to see when the service was last restarted, or ‘failed’ to find any less critical problems that might not be reported as errors.

 

When we try

 

http://3.222.27.169/wp-admin

 

 we see in /var/log/mysql/error.log:

 

2021-12-29 19:39:44 264 [Warning] Access denied for user ''@'localhost' (using password: NO)
2021-12-29 19:40:22 270 [Warning] Access denied for user ''@'localhost' (using password: NO)

 

so, it looks like there is some login issue for the mysql database user:

 

 

 

Check the MySQL Database

 

 

 

MariaDB []> SELECT User, Host FROM mysql.user;
+---------------+-----------+
| User          | Host      |
+---------------+-----------+
| *             | %         |
| *             | localhost |
| *             | localhost |
| *             | localhost |
| *             | localhost |
| *             | localhost |
+---------------+-----------+
6 rows in set (0.000 sec)

MariaDB [kevwells]>

 

NOTE! Usernames have been removed from the above for security reasons.

 

Use the desc mysql.user; statement to display information about the table’s columns. Once you know the column name, you can run a query against a selected data.

 

For example, to get a list of all MySQL users accounts including information about the password and whether it is active or expired, you would use the following query:

 

SELECT User, Host, Password, password_expired FROM mysql.user;

 

 

MariaDB []> SELECT User, Host, Password, password_expired FROM mysql.user;
+—————+———–+——————————————-+——————+
| User | Host | Password | password_expired |
+—————+———–+——————————————-+——————+
| * | localhost | * | N |
| * | localhost | * | |
| * | localhost | * | N |
| * | localhost | * | N |
| * | % | * | N |
| * | localhost | * | N |
+—————+———–+——————————————-+——————+
6 rows in set (0.001 sec)

MariaDB []>

 

NOTE! Usernames and passwords have been removed from the above for security reasons.

 

Show Users that Have Access to a Particular Database

 

The information about the database-level privileges is stored in the mysql.db table.
You can query the table to find out which users have access to a given database and the level of the privileges.

 

For example, to get a list of all users that have some level access to the database named db_name you would use the following query:

 

SELECT * FROM mysql.db WHERE Db = 'db_name'\G
(\G terminates the statement and formats the output vertically; a stray ; after \G simply produces the harmless "ERROR: No query specified" seen in the output below)

 

 

MariaDB []> SELECT * FROM mysql.db WHERE Db = '*****'\G;
*************************** 1. row ***************************
Host: localhost
Db: ****
User: ***
Select_priv: Y
Insert_priv: Y
Update_priv: Y
Delete_priv: Y
Create_priv: Y
Drop_priv: Y
Grant_priv: N
References_priv: Y
Index_priv: Y
Alter_priv: Y
Create_tmp_table_priv: Y
Lock_tables_priv: Y
Create_view_priv: Y
Show_view_priv: Y
Create_routine_priv: Y
Alter_routine_priv: Y
Execute_priv: Y
Event_priv: Y
Trigger_priv: Y
Delete_history_priv: Y
1 row in set (0.001 sec)

ERROR: No query specified

MariaDB []>

 

 

NOTE! Usernames, dbname and passwords have been removed from the above for security reasons.

 

 

To fetch information only about the user accounts that have access to a given database, without displaying the privileges use:

 

SELECT db, host, user FROM mysql.db WHERE db = 'db_name';

 

 

MariaDB []> SELECT db, host, user FROM mysql.db WHERE db = '***';
+----------+-----------+------+
| db       | host      | user |
+----------+-----------+------+
| *        | localhost | *    |
+----------+-----------+------+
1 row in set (0.001 sec)

MariaDB []>

 

NOTE: Sensitive information has been removed from the output for security reasons!

 

The following query will show you information about all databases and associated users:

 

SELECT db, host, user FROM mysql.db;

 

Try restoring the database from a backup made using the wp-db-backup plugin:

 

 

 

Put the backed-up SQL back into MySQL/MariaDB:
user@linux:~/files/blog> mysql -h mysqlhostserver -u mysqlusername -p databasename < blog.bak.sql

 

Enter password: (enter your mysql password)
user@linux:~/files/blog>

 

root@gemini:/var/www/kevwells.com# mysql -h localhost -u *** -p *** < /var/www/kevwells.com/*****.sql

Enter password:
root@gemini:/var/www/kevwells.com#

 

 

NOTE! Usernames, dbname and passwords have been removed from the above for security reasons

 

 

To delete a user use:

 

DROP USER 'exampleuser1'@localhost;

 

 

then check the user is no longer present:

 

MariaDB [(none)]> SELECT User FROM mysql.user;

 

 

The syntax for changing a password using the SET PASSWORD statement in MariaDB is:

 

SET PASSWORD [ FOR user_name ] =
{
    PASSWORD('plaintext_password1')
  | OLD_PASSWORD('plaintext_password2')
  | 'encrypted_password'
};

 

 

For example, if you had an entry with User and Host column values of ‘bob’ and ‘%.loc.gov’, you would write the statement like this:

 

SET PASSWORD FOR 'bob'@'%.loc.gov' = PASSWORD('newpass');

 

If you want to delete a password for a user, you would do:

 

SET PASSWORD FOR 'bob'@localhost = PASSWORD("");

 

I was then able to access phpmyadmin, so one step forward on the way to solving the problem.

 

In phpmyadmin update the site url if necessary with:

 

UPDATE wp_options SET option_value='YOUR_SITE_URL' WHERE option_name='siteurl'

 

UPDATE wp_options SET option_value='http://3.222.27.169' WHERE option_name='siteurl'

 

 

Show query box
1 row affected. (Query took 0.0024 seconds.)
UPDATE wp_options SET option_value='http://3.222.27.169' WHERE option_name='siteurl'

 

 

1 row affected – yes. And:

 

apache2, mysql and sshd are all running.

 


 

 

Set the MySQL user password
Type the following commands if you have MySQL 5.7.6 and later or MariaDB 10.1.20 and later:

 

ALTER USER 'user-name'@'localhost' IDENTIFIED BY 'NEW_USER_PASSWORD';
FLUSH PRIVILEGES;

 

You MUST do a FLUSH PRIVILEGES;

 

 

MariaDB []> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.001 sec)

MariaDB [mysql]>

 

that is now working ok!

 

but the website still can’t connect…

 

 

 

Note the command and its arguments are of the form:

 

mysql -h localhost -u username -p database

 

 

The arguments are

 

-h followed by the server host name (localhost)
-u followed by the account user name (use your MySQL username)
-p which tells mysql to prompt for a password
database the name of the database (use your database name).
Once mysql is running, you can type SQL statements and see the output in the terminal window.

 

NOTE -p is NOT PASSWORD!!!

 

 

so lets try:

 

mysql -h localhost -u username -p database

 

 

 

It works, we can connect to the database…

 

 

but the website still does not connect!

 

 

To reset a user’s MySQL/MariaDB password, log in to the command-line client as root with the command:

 

mysql -u root -p

 

Next, reset the user’s password with the command:

 

UPDATE mysql.user SET password = PASSWORD('new_password') WHERE user = 'jdoe';

 

Replace new_password with the new password, and jdoe with the username.

 

 

 

Next, show the user’s grants with the command:

 

show grants for 'jdoe'@'localhost';

 

Replace jdoe with the username. If applicable, change localhost to the host name.

 

You should get a list of the privileges that user has for the relevant database. It should look something like this:

 

MariaDB [(none)]> show grants for '***'@'localhost';
+-------------------------------------------------------------------------------------------------+
| Grants for ***@localhost                                                                         |
+-------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO `****`@`localhost` IDENTIFIED BY PASSWORD '**********'            |
| GRANT ALL PRIVILEGES ON `<database>`.* TO `****`@`localhost`                                     |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON `<database>`.* TO `***`@`localhost` |
+-------------------------------------------------------------------------------------------------+
3 rows in set (0.000 sec)

MariaDB [(none)]>

 

 

NOTE: Sensitive data removed from above for security reasons!

 

to create a new user:

 

Log into MySQL as root:

mysql -u root

Grant privileges. For a new user, execute:

CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'localhost';
FLUSH PRIVILEGES;

 

I created a new database admin user for phpMyAdmin, as the original user could suddenly no longer log in!

 

 

can now login once again to phpmyadmin

 

 

Check the MySQL hostname

 

If the script is hosted on the same server as the MySQL database (e.g. your Linux Cloud Server), the hostname is localhost by default.

 

To double-check the MySQL hostname, log into the command-line client as root with the command:

 

mysql -u root -p

 

Next, show the hostname with the command:

 

SHOW VARIABLES WHERE Variable_name = 'hostname';

 

 

MariaDB [(none)]> SHOW VARIABLES WHERE Variable_name = 'hostname';
+---------------+--------+
| Variable_name | Value  |
+---------------+--------+
| hostname      | gemini |
+---------------+--------+
1 row in set (0.001 sec)

MariaDB [(none)]>

 

 

On Debian and Ubuntu servers the configuration file for MySQL is usually saved at /etc/mysql/. It’s also possible to have user-specific settings stored at /home/<user>/.my.cnf, which would override the global configurations. Check if any user level overrides have been set.

 

It is commonly advised to have separate usernames for different web applications, so check at least those relevant to your page loading issues. You can open the global configuration file with the first of the two commands below, and the user-specific one with the second, replacing <user> with a database username.

 

sudo nano /etc/mysql/my.cnf

 

sudo nano /home/<user>/.my.cnf

 

By scrolling down past [client] and [mysqld_safe] settings you’ll find something like the example here.

 

 

[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1

 

 

The lines here to pay close attention to are ‘socket’, ‘datadir’ and ‘bind-address’. The parameters in the example above are in their default values, and in most cases, your configuration would look the same. Make sure the settings point to the correct directories so that MySQL can actually find the required files.

 

The easiest way to check the ‘datadir’ is to use this command below

 

sudo ls -l /var/lib/mysql/

 

The output will list all files in that directory, it should contain at least the following plus any databases you have created.

 

drwx------ 2 mysql root  4096 Aug 5 12:23 mysql
drwx------ 2 mysql mysql 4096 Aug 5 12:29 performance_schema

 

If the data directory or socket has been moved and MySQL doesn’t know where they are, fix the configuration file to point to the correct directories. You can search for the folders with the following command.

 

sudo find / -name performance_schema && sudo find / -name mysql.sock

 

The third parameter you’ll need to check is the bind-address, this is only really relevant if your database needs to be accessed remotely. In Debian and Ubuntu installations the bind is by default set to the loopback address, which prevents database calls from outside the localhost. CentOS doesn’t have the same parameter unless manually set. For any setup where your web service is on a different server to the database, this bind-address should be set to the server’s own private IP.
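
A quick sketch of confirming which address and port mysqld is actually bound to (same idea as the netstat check earlier):

sudo ss -tlnp | grep -E 'mysqld|3306'
# e.g. LISTEN 0 80 127.0.0.1:3306 ... users:(("mysqld",pid=694,fd=21))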

 

 

 

 

 

Importing a MySQL or MariaDB Database

 

To import an existing dump file into MySQL or MariaDB, you will have to create a new database. This database will hold the imported data.

 

First, log in to MySQL as root or another user with sufficient privileges to create new databases:

 

mysql -u root -p

This command will bring you into the MySQL shell prompt. Next, create a new database with the following command. In this example, the new database is called new_database:

 

CREATE DATABASE new_database;

You’ll see this output confirming the database creation.

 

Output
Query OK, 1 row affected (0.00 sec)

 

Then exit the MySQL shell by pressing CTRL+D. From the normal command line, you can import the dump file with the following command:

 

mysql -u username -p new_database < data-dump.sql

username is the username you can log in to the database with
new_database is the name of the freshly created database
data-dump.sql is the data dump file to be imported, located in the current directory

 

If the command runs successfully, it won't produce any output. If any errors occur during the process, mysql will print them to the terminal instead. To check whether the import was successful, log in to the MySQL shell and inspect the data: select the new database with USE new_database; and then use SHOW TABLES; or a similar command to look at some of the data, as in the one-liner below.
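
A compact way to run that check from the shell (same username and database name as above):

mysql -u username -p new_database -e "SHOW TABLES;"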

 

 

The Actual Cause of the Problem In This Case – WordPress Plugins

 

 

I finally ascertained that the cause of the problem was WordPress plugins. Presumably one or more WordPress plugins had been automatically updated, and this suddenly introduced the bug.

 

 

 

The Solution 

 

The solution was to move all plugins to a new directory: plugins.disabled.

 

Then move each plugin back to the original plugin directory one by one; for each plugin, log in to wp-admin, activate it, and test the website content functionality.

 

Continue in this way, moving plugins back one by one, repeating the procedure for each in turn.

 

Make sure when you test the page content that you call up different pages each time – otherwise you could be getting a “false positive” from website content cached in the web browser or at the internet provider!

 

In addition, I made sure to disable the automatic WordPress plugin update option. As so often happens with these kinds of “smart functionality” bright ideas for auto-updating dreamt up by programmers, in practice it ends up causing more problems than the one it was originally intended to solve.

 

So, the website is now connecting to the database successfully and is once again able to display the content without errors. Let’s hope we can keep it that way.

 

Until the next software update.

 


How To Install Mysql/Mariadb

I refer here to mysql and mariadb interchangeably.

 

Mariadb is a fork of a later version of mysql and is largely compatible with mysql. To all intents and purposes for this installation and configuration instance they are the same.

 

 

apt install mysql-server

 

during the installation you’ll be requested to set a mysql server admin user password.

 

If the secure installation utility does not run automatically during the installation process, then you can run it explicitly:

 

mysql_secure_installation

 

This prompts you to set the mysql root password and some other security options, such as disabling remote access for the root user.

 

Next, install the mysql php module:

 

apt-get install php-mysql

 

then start the server:

 

systemctl enable mysql
systemctl start mysql

 

Check that mysql is running with:

 

root@gemini:/etc/apache2/sites-enabled# systemctl status mysql
● mariadb.service - MariaDB 10.3.22 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-06-28 18:19:05 BST; 24h ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 586 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
Process: 619 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 631 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR>
Process: 738 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 740 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
Main PID: 694 (mysqld)
Status: "Taking your SQL requests now..."
Tasks: 30 (limit: 2282)
Memory: 382.1M
CGroup: /system.slice/mariadb.service
└─694 /usr/sbin/mysqld

 

You can also test the system with:

 

mysqladmin -p -u root version

 

This will return the mysql or mariadb server version information.

 

root@gemini:~# mysqladmin -p -u root version
Enter password:

 

mysqladmin Ver 9.1 Distrib 10.3.22-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

 

Server version 10.3.22-MariaDB-1ubuntu1
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 1 day 15 hours 16 min 7 sec

 

Threads: 10 Questions: 1333249 Slow queries: 0 Opens: 127 Flush tables: 1 Open tables: 120 Queries per second avg: 9.431
root@gemini:~#

 

 

Next, you can create an admin user and an initial database:

 

If you have problems with the root password, you can reset it using the mysql command line tool:

 

mysql -u root -p

 

you can display the users which are defined with:

 

SELECT User, Host, authentication_string FROM mysql.user;

 

To change the root password enter:

 

UPDATE mysql.user SET authentication_string = PASSWORD('password') WHERE User = 'root';

 

and then:

 

FLUSH PRIVILEGES;

 

exit

Again in the mysql command line tool, you can create a database with:

 

CREATE DATABASE demodb;

display all existing databases with:

 

SHOW DATABASES;

 

 

To work with a database:

 

USE <databasename>;

 

and delete permanently a database with:

 

DROP DATABASE <databasename>;

 

NOTE: you cannot retrieve a database once you have deleted it with DROP!

 

With that the basic installation of mysql server is complete.

 
