I was working with a new computer system. I had made a backup of the database. But could I rely on it?
Having run the first backup of the database, I wanted to test that it was valid.
What I didn’t want to do was to restore the database over the existing one, without the assurance that I wouldn’t be losing data. Too risky!
From within SQL Server Management Studio I clicked New Query and ran the following:
RESTORE VERIFYONLY FROM DISK = 'Z:\Backup\project1.bak'
You may have to wait a while for the verification process to run.
Here’s an example output showing that the database backup is good to be used:
The backup set on file 1 is valid.
A simple command to run, with a lot of reassurance as a result.
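Where the backup was taken WITH CHECKSUM, the verification can be made stricter, and RESTORE HEADERONLY will list what the file actually contains. A sketch, assuming the same backup path:

```sql
-- List the backup sets held within the file: name, date, size and so on.
RESTORE HEADERONLY FROM DISK = 'Z:\Backup\project1.bak'

-- If the backup was created WITH CHECKSUM, ask VERIFYONLY to recheck
-- the page checksums as well as the backup structure.
RESTORE VERIFYONLY FROM DISK = 'Z:\Backup\project1.bak' WITH CHECKSUM
```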
When adding an entry to a database table it would be nice to combine the addition with an update. For example, when adding an entry that already exists, the existing entry can be updated instead.
Done as two steps this costs an extra round trip: a first database query attempts the addition and returns either a result confirming that the addition occurred or a value to denote that the entry already exists; if the entry already exists then there is a further call to the database to update it.
Depending upon the circumstances it may be better to combine this sequence within a single stored procedure call to the database. The stored procedure checks whether a particular row exists; if it does then it is updated, otherwise a new entry is added.
Here’s the initial setup of the stored procedure, with the passed in parameters:
CREATE PROCEDURE AddUpdateProduct
    @ProductId INT,
    @Name NVARCHAR(150),
    @Details NVARCHAR(max)
AS
Let's begin with a classic update:
UPDATE Product SET Name = @Name, Details = @Details WHERE ProductId = @ProductId
Similarly an insert would look like:
INSERT INTO Product ( Name, Details ) VALUES ( @Name, @Details )
SELECT SCOPE_IDENTITY()
I’ve added SELECT SCOPE_IDENTITY(), to retrieve the ID of the added entry.
In our example we wish to combine the two together. We’ll start by trying to update the entry.
A test on the rowcount will tell us whether we were successful. If no rows were affected then a new row is added.
Here’s the example with the combined code:
CREATE PROCEDURE AddUpdateProduct
    @ProductId INT,
    @Name NVARCHAR(150),
    @Details NVARCHAR(max)
AS
BEGIN
    UPDATE Product SET Name = @Name, Details = @Details WHERE ProductId = @ProductId
    IF @@ROWCOUNT = 0
    BEGIN
        INSERT INTO Product ( Name, Details ) VALUES ( @Name, @Details )
        SELECT SCOPE_IDENTITY()
    END
END
A short piece of SQL to update an existing row in a database table, or if the row doesn’t exist then to add a new entry.
I like this idea. By putting the whole action in the SQL we save on going backwards and forwards between the database server and our website software.
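An alternative sketch of the same add-or-update uses the MERGE statement available from SQL Server 2008; the table and column names follow the example above, and MERGE is commonly paired with a HOLDLOCK hint to avoid a race between concurrent calls:

```sql
-- Hypothetical variant of the stored procedure using MERGE.
CREATE PROCEDURE AddUpdateProductMerge
    @ProductId INT,
    @Name NVARCHAR(150),
    @Details NVARCHAR(max)
AS
BEGIN
    MERGE Product WITH (HOLDLOCK) AS target
    USING ( SELECT @ProductId AS ProductId ) AS source
        ON target.ProductId = source.ProductId
    WHEN MATCHED THEN
        UPDATE SET Name = @Name, Details = @Details
    WHEN NOT MATCHED THEN
        INSERT ( Name, Details ) VALUES ( @Name, @Details );

    -- As before, return the identity of a newly added row.
    SELECT SCOPE_IDENTITY()
END
```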
The intent was to join the address fields as a single returned value from a database query select. But I found that if one of these fields was null, the whole returned result was empty.
I had an address, in the usual way, across a number of database fields: address, town, postcode and county.
I wished to let the database query do the work of joining the fields, as opposed to the VB/C# code.
The SQL query joins these fields into a single composite field value.
Shown below is my first version of the composite field value, derived from the individual address parameters.
'Street: ' + building.address + ', Town: ' + building.town + ', Postcode: ' + building.postcode AS Location
My original expectation was that if one of these address fields was null then that part would be shown as blank.
For example with no town the returned entry might be:
Street: Broad Street, Town: , Postcode: Rg1 1AA
I found that whilst I could readily create a string composed from the individual fields, on occasion the result was empty, even though I knew at least one field held a valid value.
Investigating, I found that if any one of the fields was empty (null) then the whole combined field had a null value.
The building location was to show the address, as street, town and postcode.
'Street: ' + building.address + ', Town: ' + building.town + ', Postcode: ' + building.postcode AS Location
As can be seen it's a simple string addition of the individual fields.
However, if there was no entry in one of the fields, i.e. it was null, then the whole result was returned as an empty field, rather than just that one part being blank.
To correct the empty result I added an ISNULL test for each field, taking either the database value or an alternative presentational value. In this instance a couple of dashes to indicate that there is no entry.
'Street: ' + ISNULL(building.address, '--') + ', Town: ' + ISNULL(building.town, '--') + ', Postcode: ' + ISNULL(building.postcode, '--') AS Location
Shown above is the previous example with the addition of the isnull.
For our earlier example this gives:
Street: Broad Street, Town: --, Postcode: Rg1 1AA
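On SQL Server 2012 and later an alternative is CONCAT, which treats null arguments as empty strings rather than propagating the null; a sketch using the same example fields:

```sql
SELECT CONCAT('Street: ', building.address,
              ', Town: ', building.town,
              ', Postcode: ', building.postcode) AS Location
FROM building
```

Note the difference in presentation: a missing town appears as a blank rather than as a placeholder value.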
Inserting values into a database table I wished to add a date and time.
For this I used a CONVERT:
CONVERT(datetime, '26/05/2014 14:21:00')
However this will fail where the day value is too high: a value greater than 12 will be taken as an invalid month. And where it does not fail, the day and month will be saved swapped.
The date format should be defined as UK based, d/m/y
For this the country format value is included within the conversion.
CONVERT(datetime, '26/05/2014 14:21:00', 103)
In this example 103 tells the conversion to expect the UK date format (dd/mm/yyyy).
Note: also ensure there are no spaces at the start or end of the enclosed string.
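Where there is control over the string being supplied, an unambiguous style avoids the issue altogether; style 120 (the ODBC canonical format, yyyy-mm-dd hh:mi:ss) is read the same regardless of the server's language setting:

```sql
CONVERT(datetime, '2014-05-26 14:21:00', 120)
```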
How to view the foreign key settings within a database administered using MyLittleAdmin?
I was looking for information about the foreign keys between tables. Via the control panel for a website, I was using MyLittleAdmin to manage the associated database.
I was able to view the keys associated with a particular table, but unlike SQL Server Management Studio this didn't allow for a review of the key settings.
I turned to T-SQL and found this:
SELECT fk.name AS ForeignKey,
       OBJECT_NAME(fk.parent_object_id) AS FkTable,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS FkColumn,
       OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS ReferencedColumn,
       delete_referential_action_desc AS OnDelete,
       update_referential_action_desc AS OnUpdate
FROM sys.foreign_keys AS fk
INNER JOIN sys.foreign_key_columns AS fkc
    ON fk.object_id = fkc.constraint_object_id
WHERE fk.parent_object_id = OBJECT_ID('OurStuff');
By changing the table reference I was able to get the list of keys and their settings.
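Dropping the WHERE clause gives an overview of every foreign key in the database rather than those for one table; a shortened sketch:

```sql
SELECT fk.name AS ForeignKey,
       OBJECT_NAME(fk.parent_object_id) AS FkTable,
       OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable
FROM sys.foreign_keys AS fk
ORDER BY FkTable
```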
Inserting table rows including the index idents will give an error similar to:
Cannot insert explicit value for identity column in table TableName when IDENTITY_INSERT is set to OFF.
I was copying the contents of a database table from one installation of DotNetNuke to another.
Wishing to maintain the table structure and the parent and child references meant that, to best recreate and copy the table, I should maintain the existing identity values.
The table used its unique identity column to create a parent and child relationship between some of the rows.
To export the table I was able to select all of the table content and display it as a table.
The table of data could then be copied into an editor (Geany) to add the necessary insert SQL.
To properly maintain the table data and allow a true reproduction, the interlinking between the rows must be maintained. To achieve this the identity values must be imported too.
A table will automatically assign the row its identity value when the row is inserted. If the identity field is included as part of the insert statement the insert will fail with an error.
For a table with an identity set on one of the fields, performing a simple insert of explicit values will fail because the identity field doesn't permit a value to be set.
To overcome the block on inserting into the identity field I turned off the restriction at the start of my SQL insert and restored it again afterwards, topping and tailing the insert statements with IDENTITY_INSERT ON and OFF:
SET IDENTITY_INSERT GalleryAlbum ON
-- insert of table rows SQL here
SET IDENTITY_INSERT GalleryAlbum OFF
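As a fuller sketch, here is the shape of the finished script; the column names (AlbumID, ParentID, Title) are hypothetical stand-ins for the real table. Note that an explicit column list is required when inserting a value into an identity column:

```sql
SET IDENTITY_INSERT GalleryAlbum ON

-- Parent row, then a child row referencing it by its identity value.
INSERT INTO GalleryAlbum ( AlbumID, ParentID, Title )
VALUES ( 1, NULL, 'Holidays' )
INSERT INTO GalleryAlbum ( AlbumID, ParentID, Title )
VALUES ( 2, 1, 'Summer 2014' )

SET IDENTITY_INSERT GalleryAlbum OFF
```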
To enable database tables to handle the extended character set languages I was looking to convert fields from type text to ntext.
Aware of the pending loss of support for the field types text and ntext I also chose to convert these fields to nvarchar(max).
SQL server 2016 is removing support for field type text and ntext.
In this example table I had a field called body which was of type text.
The field was to retain its name but to be changed to a type which would support a greater range of international languages.
To prepare for SQL server 2016 I chose to also change it to type nvarchar(max).
Given below is the SQL applied to the table portfolio.
alter table dbo.Portfolio add body2 text
go
update dbo.Portfolio set body2 = body
go
alter table dbo.Portfolio drop column body
go
alter table dbo.Portfolio add body nvarchar(max)
go
update dbo.Portfolio set body = body2
go
alter table dbo.Portfolio drop column body2
go
As can be seen above I begin by adding a second field of type text, called body2.
The field body is then copied to this new field body2.
The old field can now be dropped. You may prefer to check at this point that the data has been copied.
Now to create the new version of the body, giving a type of nvarchar(max).
Once more the data is copied. This time back to the body field.
Perhaps another check of the data copy?
And finally deletion of the temporary field.
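For what it's worth, SQL Server also permits converting a text column to nvarchar(max) in place, without the temporary column; a sketch, though the copy-and-check route above gives more opportunity to verify the data along the way:

```sql
alter table dbo.Portfolio alter column body nvarchar(max)
```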
Consider a database table of posts or pages.
The website address has changed, perhaps from the development URL to the live website.
Or maybe the website’s URL is to change from example.co.uk to example.com.
The tables are to be updated, ideally searching and replacing the old value with a new one.
Given below is a search and replace for the table post, replacing the URL entries in the field pbody.
UPDATE post SET pbody = REPLACE(pbody,'example.co.uk','example.com') WHERE pbody LIKE '%example.co.uk%'
A search and replace of a MS SQL table replacing a website URL or correcting an error.
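As a sanity check it can be worth counting the rows the WHERE clause will match before running the update, then comparing with the rows-affected count reported afterwards:

```sql
SELECT COUNT(*) AS RowsToChange
FROM post
WHERE pbody LIKE '%example.co.uk%'
```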
ExpressMaint is a utility which may be used to automate the backup of SQL Server databases.
It can be used to create and save a backup of each of the databases within an SQL Server. Better still it can be set to age the files, deleting all those older than, say, a month.
The ExpressMaint project home page is here: https://expressmaint.codeplex.com/
Download a copy of the zip file ExpressMaint.zip. Extract the ExpressMaint.exe file contained within to your scripts directory.
I’ll assume that the exe file has been added to the folder as: z:\ExpressMaint.exe.
I use a batch file to call the exe file, passing the relevant parameters.
And I have created a backup directory for the databases at z:\backup.
The batch file for running ExpressMaint is located at: z:\expmaint.cmd.
The contents of which are given below:
"z:\expressmaint.exe" -S webserver\SQLserver -D ALL_USER -T DB -R "z:\dbBackup" -RU WEEKS -RV 4 -B "z:\dbBackup" -BU WEEKS -BV 4 -V -TO 20
In the above, change the server name webserver and the instance name SQLserver as appropriate.
I created a scheduled task to run daily.
On Windows Server 2012 I found that expressmaint.exe wasn’t running as a scheduled task.
Searching for more information about the cause of the issue, I found this on Stack Exchange, which recommends using a later version of ExpressMaint:
I am actually using expressmaint with sql server 2012 express so u shouldn’t have any problems. make sure u use
https://expressmaint.codeplex.com/downloads/get/91612 which is version 188.8.131.52 and NOT 184.108.40.206
Stack Exchange article reference is:
Following the referenced link and using that version of ExpressMaint worked.
An enlarged DotNetNuke database can affect performance and also be an indication of a more serious issue. Listing table sizes can help to understand and resolve issues.
I have used this piece of SQL to check whether the site log or the event log is oversized on a website.
CREATE TABLE #temp (
    table_name sysname,
    row_count INT,
    reserved_size VARCHAR(50),
    data_size VARCHAR(50),
    index_size VARCHAR(50),
    unused_size VARCHAR(50)
)
SET NOCOUNT ON
INSERT #temp
EXEC sp_msforeachtable 'sp_spaceused ''?'''
SELECT a.table_name,
       a.row_count,
       COUNT(*) AS col_count,
       a.data_size
FROM #temp a
INNER JOIN information_schema.columns b
    ON a.table_name collate database_default = b.table_name collate database_default
GROUP BY a.table_name, a.row_count, a.data_size
ORDER BY CAST(REPLACE(a.data_size, ' KB', '') AS integer) DESC
DROP TABLE #temp
The SQL may be executed from either within the DotNetNuke website or, where the website is failing, from the SQL Management Studio.
To run the SQL on the website, as a host user open the page Host > SQL.
Paste the above code into the box and click on Run Script at the bottom of the page.
The tables within the database will be listed together with their associated sizes.
The tables are listed showing their name, row count, column count and data size, ordered with the data size descending.