When a previously created database table requires the addition of a default value, can the amendment be made easily, without recourse to the SQL Configuration Manager?
I had configured a database table. Subsequent testing showed that one of the fields required a default value.
I was testing for a value of less than 0, but there was an occasion where this field might not be given a value at all.
The use of a default value on the field would ensure a minimum base value for use in comparisons.
For ease I wished to implement the change through the website’s admin SQL page.
I was using DotNetNuke which supports this. Your website might support something similar, or perhaps write a one-time use piece of code, or action through another SQL interface.
ALTER TABLE fruit ADD DEFAULT (-1) FOR qty
In the example above I am amending the table called fruit and assigning a default value of -1 to the qty field.
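As a quick check that the default has taken effect, an insert which omits the qty field should pick up the -1. This is a sketch: the name column used here is hypothetical, standing in for whatever other fields the fruit table holds.

```sql
-- Hypothetical name column; qty is deliberately omitted from the insert
INSERT INTO fruit (name) VALUES ('banana')

-- The qty field for the new row should now hold the default of -1
SELECT name, qty FROM fruit WHERE name = 'banana'
```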
For a parent-child database table relationship, could retrieving the nth child row be used?
I had a project comprising a main table with an associated secondary table.
Similar in style to a manufacturers (parent) table with an associated products (child) table.
In this particular example I knew that there would only be two child entries associated with the parent table.
Interested in taking a different approach, I wanted to retrieve a data row from the child table knowing only the parent table reference and the row number: the nth row from the database table.
In my project there would be control over the order of the entries in the child table: derived from a series of forms, the child entries would be created in a defined order.
I chose to experiment with the idea that I could select the child entries by row number, rather than adding another controlling field.
Below is shown the outline of the stored procedure to get the data from the two tables.
CREATE PROCEDURE GetProduct
    @ManufacturerID [int],
    @RowNumber [int]
AS
WITH NumberedProducts AS (
    SELECT tblProducts.*,
           ROW_NUMBER() OVER (ORDER BY ProductID) AS RowNumber
    FROM tblProducts
    WHERE ManufacturerID = @ManufacturerID
)
SELECT *
FROM NumberedProducts
WHERE RowNumber = @RowNumber
My experiment worked and I was able to select the right entry from the secondary table.
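As a usage sketch, assuming the procedure's second parameter is the row number (as its WHERE clause implies) and using example values, the call to fetch a given child row would look like:

```sql
-- Fetch the 2nd product row for manufacturer 5 (example values)
EXEC GetProduct @ManufacturerID = 5, @RowNumber = 2
```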
I was concerned that with this implementation, apart from the row number, there was nothing to define what a row was.
Adding another field to define the child entry type was subsequently chosen as a better approach.
And adding this type of definition removed the whole point of retrieval by row number.
Considering things which could go wrong:
- What if the rows did get entered back to front?
- What if there was an error on save, maybe there were 3 rows?
As the project stood only the two child table rows would exist. But that's not robust, nor designed for future development. Whilst it's not possible to design for every eventuality, potential hindrances should be avoided.
My conclusion was that it is better to have an additional controlling field.
We can use SQL to replace the value in a field with a new value.
There's no need to search for the affected rows, downloading each in turn, making the amendment and finally making the update.
For the update of the given field in the table we can use one SQL statement, changing all matching entries.
Here’s the general format of the search and replace:
UPDATE dbo.TableName SET Field = REPLACE(Field, CurrentString, NewString)
So considering a spelling error, or perhaps a change of product name, we wish to update the product details table:
UPDATE dbo.product SET details = REPLACE(details, 'appls', 'apples')
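Before running a bulk update like this it can be reassuring to preview the rows that will change. A minimal sketch against the same table:

```sql
-- List the rows containing the misspelling before replacing it
SELECT details FROM dbo.product WHERE details LIKE '%appls%'
```

If the list looks right, the UPDATE can then be run with confidence.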
Note, the searched-for string doesn't have percentage characters. The search is a true match, not an SQL LIKE.
An SQL LIKE would be used as:
SELECT * FROM dbo.product WHERE details LIKE '%apple%'
I was working with a new computer system. I had made a backup of the database. But could I rely on it?
To confirm the first run of a database backup I wanted to test/verify it was OK.
What I didn’t want to do was to restore the database over the existing one, without the assurance that I wouldn’t be losing data. Too risky!
From within SQL Server Management Studio I clicked New Query and ran the following:
RESTORE VERIFYONLY FROM DISK = 'Z:\Backup\project1.bak'
You may have to wait a while for the verification process to run.
Here’s an example output showing that the database backup is good to be used:
The backup set on file 1 is valid.
A simple command to run, with a lot of reassurance as a result.
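If further detail is wanted, the related RESTORE inspection commands report what the backup file actually contains, again without touching the live database. A sketch against the same backup file:

```sql
-- Show backup set details (database name, backup dates, sizes)
RESTORE HEADERONLY FROM DISK = 'Z:\Backup\project1.bak'

-- List the data and log files held within the backup
RESTORE FILELISTONLY FROM DISK = 'Z:\Backup\project1.bak'
```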
When adding an entry to a database it would be nice to combine the addition with an update.
For example, when adding a new entry, if it already exists then that entry can be updated instead.
This means putting more of the sequence within the database stored procedure. But it will save on a database query, with its possible action (the addition), and a result confirming either that the addition occurred or that the entry already exists.
If the entry already exists then there is a further call to the database to update that entry.
Depending upon the circumstances it may be better to combine this sequence within a single stored procedure call to the database.
There is the option to check if a particular row value exists. If it does then update or add a new entry.
Here’s the initial setup of the stored procedure, with the passed in parameters:
CREATE PROCEDURE AddUpdateProduct
    @ProductId INT,
    @Name NVARCHAR(150),
    @Details NVARCHAR(max)
AS
Let's begin with a classic update:
UPDATE Product SET Name = @Name, Details = @Details WHERE ProductId = @ProductId
Similarly an insert would look like:
INSERT INTO Product ( Name, Details ) VALUES ( @Name, @Details ) SELECT SCOPE_IDENTITY()
I’ve added SELECT SCOPE_IDENTITY(), to retrieve the ID of the added entry.
In our example we wish to combine the two together. We’ll start by trying to update the entry.
A test on the rowcount will tell us whether we were successful. If no rows were affected then a new row is added.
Here’s the example with the combined code:
CREATE PROCEDURE AddUpdateProduct
    @ProductId INT,
    @Name NVARCHAR(150),
    @Details NVARCHAR(max)
AS
UPDATE Product
SET Name = @Name, Details = @Details
WHERE ProductId = @ProductId
IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO Product ( Name, Details )
    VALUES ( @Name, @Details )
    SELECT SCOPE_IDENTITY()
END
A short piece of SQL to update an existing row in a database table, or if the row doesn’t exist then to add a new entry.
I like this idea. By putting the whole action in the SQL we are saving going backwards and forwards between the database server and our website software.
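As a usage sketch, with hypothetical example values, a single call now covers both cases:

```sql
-- Updates product 7 if it exists; otherwise inserts it and returns the new ID
EXEC AddUpdateProduct @ProductId = 7, @Name = 'Apples', @Details = 'Eating apples'
```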
The intent was to join the address fields as a single returned value from a database query select. But I found that if one of these fields was null, the whole returned result was empty.
I had an address, in the usual way, across a number of database fields: address, town, postcode and county.
I wished to let the database query do the work, joining the fields, as opposed to the VB/C# code.
The SQL query would join these fields into a single composite field value.
Shown below is my first version of the composite field value, derived from the individual address parameters.
'Street: ' + building.address + ', Town: ' + building.town + ', Postcode: ' + building.postcode AS Location
My original expectation was that if one of these address fields was null then that part would be shown as blank.
For example with no town the returned entry might be:
Street: Broad Street, Town: , Postcode: Rg1 1AA
I found that whilst I could readily create a string composed from the individual fields on occasion the result was empty, even though I knew at least one field to have a valid value.
Investigating I showed that if one of the fields was empty (null) then the whole combined field had a null value.
The building location was to show the address, as street, town and postcode.
'Street: ' + building.address + ', Town: ' + building.town + ', Postcode: ' + building.postcode AS Location
As can be seen it's a simple string addition of the individual fields.
However, if there was no entry in one of the fields, i.e. it was null, then the whole result was returned as an empty field, as opposed to just that single part being empty.
To correct the empty result error I added an ISNULL test for each field, taking either the database value or an alternative presentational value. In this instance, a couple of dashes to indicate that there is no entry.
'Street: ' + ISNULL(building.address, '--') + ', Town: ' + ISNULL(building.town, '--') + ', Postcode: ' + ISNULL(building.postcode, '--') AS Location
Shown above is the previous example with the addition of the isnull.
For our earlier example this gives:
Street: Broad Street, Town: --, Postcode: Rg1 1AA
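An alternative worth noting: on SQL Server 2012 and later, the CONCAT() function treats a null argument as an empty string, so the ISNULL wrappers are not needed. A sketch, assuming the same building table:

```sql
-- CONCAT substitutes '' for any null argument, so nulls no longer blank the whole result
SELECT CONCAT('Street: ', building.address,
              ', Town: ', building.town,
              ', Postcode: ', building.postcode) AS Location
FROM building
```

The trade-off is that a missing town then shows as a blank rather than the '--' marker, so the ISNULL version still wins where a visible placeholder is wanted.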
Inserting values into a database table I wished to add a date and time.
For this I used a convert
CONVERT(datetime, '26/05/2014 14:21:00')
However this will fail where the day value is too high: a value greater than 12 will be taken as an invalid month. Worse, where the day is 12 or less, the day and month will be silently saved swapped.
The date format should be defined as UK based, d/m/y
For this the country format value is included within the conversion.
CONVERT(datetime, '26/05/2014 14:21:00', 103)
In this example, style 103 handles the UK date format.
Note: also ensure there are no spaces at the start or end of the enclosed string.
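Putting this together as a sketch, with a hypothetical orders table and placed_at column for illustration, the insert would look like:

```sql
-- Hypothetical table and column; style 103 reads the string as dd/mm/yyyy
INSERT INTO orders (placed_at)
VALUES (CONVERT(datetime, '26/05/2014 14:21:00', 103))
```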
How to view the foreign key settings within a database administered using MyLittleAdmin?
Looking to find information about foreign keys between tables, via the control panel for a website, I was using MyLittleAdmin to manage the associated database.
I was able to view the keys associated with a particular table, but unlike SQL Server Management Studio this didn't allow for a review of the key settings.
I looked to using T-SQL and found this:
SELECT fk.name AS ForeignKey,
       OBJECT_NAME(fk.parent_object_id) AS FkTable,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS FkColumn,
       OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS ReferencedColumn,
       delete_referential_action_desc AS OnDelete,
       update_referential_action_desc AS OnUpdate
FROM sys.foreign_keys AS fk
INNER JOIN sys.foreign_key_columns AS fkc
    ON fk.object_id = fkc.constraint_object_id
WHERE fk.parent_object_id = OBJECT_ID('OurStuff');
By changing the table reference I was able to get the list of keys and their settings.
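A variant of the same query, as a sketch: dropping the WHERE clause lists every foreign key in the database, which suits an overall review rather than a table-by-table one.

```sql
-- List all foreign keys in the database, grouped by the referencing table
SELECT fk.name AS ForeignKey,
       OBJECT_NAME(fk.parent_object_id) AS FkTable,
       OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable
FROM sys.foreign_keys AS fk
ORDER BY FkTable
```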
Inserting table rows that include the identity values will give an error similar to:
Cannot insert explicit value for identity column in table TableName when IDENTITY_INSERT is set to OFF.
I was copying the contents of a database table from one installation of DotNetNuke to another.
Wishing to maintain the table structure and the parent and child references meant that, to best recreate and copy the table, I should maintain the existing identity values.
The table used its unique identity index to create a parent and child relationship between some of the rows.
To export the table I was able to select all of the table content and display it as a table.
The table of data could then be copied into an editor (Geany) to add the necessary insert SQL.
To properly maintain the table data and allow a true reproduction, the interlinking between the rows must be maintained. To achieve this the identity values must be imported.
A table will automatically assign the row ident when the row is inserted. If the field is included as part of the insert statement the insert will fail with an error.
To overcome the block on inserting into the ident field I turned off the restriction at the start of my SQL insert and restored it again afterwards.
For a table with an identity set on one of the fields, performing a simple insert will fail because the identity field doesn't permit its value to be set.
To overcome this I topped and tailed the insert statements with identity insert on and off:
SET IDENTITY_INSERT GalleryAlbum ON
Insert of table rows SQL
SET IDENTITY_INSERT GalleryAlbum OFF
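Filled out as a sketch (the GalleryAlbum column names here are hypothetical), the full sequence looks like this. Note that while IDENTITY_INSERT is on, the column list, including the identity column, must be stated explicitly:

```sql
SET IDENTITY_INSERT GalleryAlbum ON

-- Explicit column list required, with the identity column (AlbumID, hypothetical) included
INSERT INTO GalleryAlbum (AlbumID, ParentID, Title) VALUES (1, 0, 'Holidays')
INSERT INTO GalleryAlbum (AlbumID, ParentID, Title) VALUES (2, 1, 'Beach')

SET IDENTITY_INSERT GalleryAlbum OFF
```

Only one table per session can have IDENTITY_INSERT on at a time, which is another reason to turn it off promptly.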
To enable database tables to handle the extended character set languages I was looking to convert fields from type text to ntext.
Aware of the pending loss of support for the field types text and ntext I also chose to convert these fields to nvarchar(max).
Microsoft has deprecated the field types text and ntext; support is due to be removed in a future version of SQL Server.
In this example table I had a field called body which was of type text.
The field was to retain its name but to be changed to a type which would support a greater range of international languages.
To prepare for this removal I chose to also change it to type nvarchar(max).
Given below is the SQL applied to the table portfolio.
ALTER TABLE dbo.Portfolio ADD body2 text
GO
UPDATE dbo.Portfolio SET body2 = body
GO
ALTER TABLE dbo.Portfolio DROP COLUMN body
GO
ALTER TABLE dbo.Portfolio ADD body nvarchar(max)
GO
UPDATE dbo.Portfolio SET body = body2
GO
ALTER TABLE dbo.Portfolio DROP COLUMN body2
GO
As can be seen above I begin by adding a second field of type text, called body2.
The field body is then copied to this new field body2.
The old field can now be dropped. You may prefer to check at this point that the data has been copied.
Now to create the new version of the body, giving a type of nvarchar(max).
Once more the data is copied. This time back to the body field.
Perhaps another check of the data copy?
And finally deletion of the temporary field.
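For comparison, a shorter route may also work: SQL Server can often convert a column's type in place with ALTER COLUMN. This is a sketch rather than a tested recommendation; behaviour can depend on the server version and the data held, so verify against a backup first.

```sql
-- Direct in-place conversion of the column type (test on a backup first)
ALTER TABLE dbo.Portfolio ALTER COLUMN body nvarchar(max)
```

The add-copy-drop approach above has the advantage of keeping the old data in body2 until the copy has been checked.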