Thursday, November 09, 2006

SQL 2005 SP2 CTP Released

The CTP for SQL 2005 SP2 has been released. Hopefully SP2 will be available to the public during the first quarter of 2007. Check out the Microsoft site for more details on the CTP.

Sunday, October 22, 2006

Back to the days of CUI

We manage our users through Active Directory groups for easy manageability. A couple of days back one of my developers came to me complaining (come on, give me a break, the usual developer stuff), saying "Oh man, you revoked my access, I can't edit a stored procedure, I can't modify a table definition, what is going on?" I said "Dude, I've got tons of other stuff waiting in my mailbox, don't you mess me up too :)) Why should I revoke your access?"

He came back with this error: "Property DefaultSchema is not available for Database . This property may not exist for this object, or may not be retrievable due to insufficient access rights. (SQLEditors)"

Oh gosh, what the hell is this? I started verifying his access rights; everything was perfect. I googled around and found the following link, in which someone from Microsoft accepts that this is a bug in SQL Server Management Studio, under the following conditions.

If you have users whose logins are based on an Active Directory group, i.e. they don't have access to the database directly but are part of an Active Directory group that has access to the database, then you cannot specify a default schema for that group. In that case they can't use Management Studio to create a table or a stored procedure.

So the solution is to use T-SQL. Wow, what a solution! Welcome back to the CUI world (Character User Interface).
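For what it's worth, here is a minimal sketch of that T-SQL workaround. The database, schema and object names below are made up for illustration; the point is simply to schema-qualify everything explicitly so nothing depends on the default schema that Management Studio can't resolve for the group login.

-- Hypothetical names: replace AppDB, dbo and the object definitions with your own.
USE AppDB;
GO
-- Create the table with an explicit two-part name instead of relying on a default schema.
CREATE TABLE dbo.OrderAudit
(
    auditId   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    orderId   int NOT NULL,
    changedOn datetime NOT NULL DEFAULT (GETDATE())
);
GO
-- Same idea for a stored procedure: schema-qualify it and any objects it touches.
CREATE PROCEDURE dbo.usp_GetOrderAudit
    @orderId int
AS
BEGIN
    SELECT auditId, orderId, changedOn
    FROM dbo.OrderAudit
    WHERE orderId = @orderId;
END
GO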

Sunday, September 24, 2006

Good Article on database files and file groups

There are lots of articles that give tons of suggestions on designing the placement of database and log files. In this article the author describes the advantages of placing the file groups separately and also the procedure to do it. In most articles the authors either explain only the advantages or only how to create a new file group; here the author explains both, for both SQL 2000 and 2005.

Wednesday, September 06, 2006

SQL 2005 Transaction Log Issues

It was another beautiful day until I reached my workplace and opened my Lotus mail client. Tons of transaction log backup failures on the newly installed 2005 dev boxes, and as usual I started feeling feverish seeing the weird errors in the error logs. Let you too have the blessings of reading these weird errors:

Msg 4214, Level 16, State 1, Line 1

BACKUP LOG cannot be performed because there is no current database backup.

Msg 3013, Level 16, State 1, Line 1

BACKUP LOG is terminating abnormally.


I was wondering, did I try something new? Hmmm, yeah, gotcha. I am a bit of a lazy goose and hadn't gotten around to designing the tlog backups (my manager doesn't read blogs!!!!!!!), so I had set up another job that truncates the tlog periodically. I started shouting "Sulekha! Sulekha!" Oops, she was my ex-colleague; what was I supposed to shout? Oh, it's Eureka! Eureka! Which didn't last long :(. I went and disabled that log truncation job and reran the backup job. The error came again. Okay, now it's time to go to Dr. Google. Dr. Google was like, oh no, this stupid guy is back again, pleaseeee no. But there was no other way out for him; he had to suffer.

I was googling the error and, wow, Dr. Google rocked again, he gave me the prescription. One Mr. Steve Schumidt from the SQL development team has admitted that this is a bug (not directly, but indirectly). He says that they have deprecated "BACKUP LOG WITH TRUNCATE_ONLY", i.e. the transaction log truncation feature, in 2005 (hmm, is it? That was not how I instructed them to build the product :)) that's interesting). The correct way of doing this is to switch the database to the simple recovery model.
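Just to spell out what "switch the database to the simple mode" looks like, a minimal sketch with a made-up database name. In the simple recovery model the log is truncated at checkpoint, so there is nothing to truncate by hand (and no point-in-time recovery either):

-- Hypothetical database name; SIMPLE lets SQL Server truncate the log on checkpoint.
ALTER DATABASE DevLogsDB SET RECOVERY SIMPLE;
GO
-- If you later need log backups again, switch back to FULL and take a full backup to restart the chain.
ALTER DATABASE DevLogsDB SET RECOVERY FULL;
GO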

Seems like this statement breaks the transaction log chain (seems it was doing this in 2000 too) and they wanted to notify 2005 users: we have broken the transaction log chain and we need a full backup before continuing to take tlog backups (Dude! There are very user-friendly ways to say this and obviously you chose the short and sweet way so that the end user doesn't understand the error message). I thought, okay, thanks Steve, let me take the full backup and then continue taking the tlog backups, and it obviously worked (after all, who searched for the solution?).
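For the record, this is roughly what fixed it for me. The database name and backup paths are placeholders; the idea is simply that a fresh full backup re-establishes the log chain, after which the log backups stop throwing error 4214:

-- Hypothetical database name and backup paths.
-- A full backup starts a new log chain...
BACKUP DATABASE DevLogsDB
    TO DISK = N'E:\Backups\DevLogsDB_full.bak';
GO
-- ...after which the transaction log backups succeed again.
BACKUP LOG DevLogsDB
    TO DISK = N'E:\Backups\DevLogsDB_log.trn';
GO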

Wednesday, August 23, 2006

SQL 2005 Agent XPs disabled

I have a SQL Server 2005 clustered named instance. I moved it to the passive node one fine day (probably a bad day for me :(() and handed the node over to the Windows folks for their further blessings. Things were fine until the Windows support team started doing their patching and other firmware updates.

Suddenly my Agent said, "Oh buddy, I'm bored and tired of running and serving you damn guys, let me take rest for a while," and changed its status to "Agent XPs disabled". I said, come on dude, you've got jobs to execute and things to monitor, cheer up. Our dude was too tired to reply to me :((.

I went all the way to Services to see whether the Agent service was running. Services said, "He is doing fine with me, dude." Someone said go to the Surface Area Configuration, so I ran to SAC hoping my run would stop there.

Oh gosh, it didn't stop there. SAC said, "Sad to say, but sorry dude, the Agent is running fine with me." I said, nah, I don't believe you, let me restart a couple of times. Pheww, nothing worked. I started running towards Cluster Administrator; nope, the Agent had become a nightmare.

Oh come on, I want to be a good guy; hmm, you're so bad at hearing me, let me take you to our Google doctor. Wow, our doctor gave me tons of prescriptions; hmm, oh god, it became another round of reading and searching, and some sites say "dude, you've got to pay to read more". At last my run seemed to stop with the code snippet below from our great SQL creator, none other than Microsoft:

USE master
GO
-- 'Agent XPs' is an advanced option, so expose the advanced options first.
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
-- Re-enable the SQL Server Agent extended stored procedures.
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE;
GO
-- Hide the advanced options again.
EXECUTE sp_configure 'show advanced options', 0;
GO
RECONFIGURE WITH OVERRIDE;
GO

Hurrah!!!!!!!!!!!! The Agent is back online now. I still have no idea why it suddenly went offline, since it used to run fine on the other node for months; about node 2 I am not sure. Still checking with Dr. Google for his expert prescription. Let me see.
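If you want to double-check the setting before and after, a quick look at the configuration value itself (just a generic catalog query, nothing specific to my boxes):

-- value_in_use = 1 means the Agent extended stored procedures are enabled.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'Agent XPs';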

Saturday, August 12, 2006

Table Partitioning

Table partitioning can be described as distributing a table and its associated indexes across different file groups. Table partitions will be quite handy if you have a large OLTP database where data keeps coming in and going out.

The most important thing before implementing table partitioning is planning and thinking. We have to plan well before designing and implementing partitions. I assure you your life is going to become hell if you try to modify the partitioning after implementing it in production. Even though the steps look like a cakewalk, they are not. So think a hundred times about all the possibilities before implementing this, and plan ahead.

The first step in implementing table partitions is to decide the number of partitions and how we are going to spread them across file groups. In this particular scenario, since I have 24 partitions, we decided to have 24 file groups spread across 8 disks. There is no restriction that we have to have only one partition per file group; we can have more than one partition in a file group, but for performance reasons we decided to have 24 file groups. Let's dig deeper into the partition stuff.
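Before the partition scheme further down can reference them, the file groups and their files have to exist. A minimal sketch of creating the first couple of them; the database name and file paths are made up for illustration, and the pattern repeats for all 24 file groups (SECONDARY01 through SECONDARY24):

-- Hypothetical database name and file paths; repeat for SECONDARY03 ... SECONDARY24.
ALTER DATABASE LogsDB ADD FILEGROUP [SECONDARY01];
ALTER DATABASE LogsDB ADD FILE
    (NAME = N'LogsDB_Sec01', FILENAME = N'F:\Data\LogsDB_Sec01.ndf', SIZE = 256MB)
    TO FILEGROUP [SECONDARY01];
GO
ALTER DATABASE LogsDB ADD FILEGROUP [SECONDARY02];
ALTER DATABASE LogsDB ADD FILE
    (NAME = N'LogsDB_Sec02', FILENAME = N'G:\Data\LogsDB_Sec02.ndf', SIZE = 256MB)
    TO FILEGROUP [SECONDARY02];
GO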

Table partitioning consists of three things:

1. Partition Function.

2. Partition Scheme.

3. The real partitioned table.

1. Partition Function.

A partition function is just a function that determines which data should go to which partition. You may be confused reading this (if someone is, bear with me). Let us consider the scenario below: we have a crazy requirement (requirements are always crazy, but that's another issue).

The requirement is that we have a system that keeps inserting its log data into SQL Server. The data in that table becomes worthless after a day. A table should be designed to hold 24 hours' worth of data, and data older than 23 hours should be deleted. There will be a couple of applications that access the data older than an hour to collect statistics.

We decided to implement table partitioning because there will be several application servers continuously inserting data and there will be applications selecting the older data. If all of this were kept in a normal table it would end up in blocking, locking, etc. So we decided to implement table partitions.

There will be 24 partitions, one per hour. Data inserted, say, at 12 pm in the afternoon should go to partition number 12; data inserted at 1 pm should go to partition number 13, and so on. The table structure is given below:

svrID int
PartitionNo int -- Partition number
svrDate datetime
svrMsg varchar(500)
logOrigin varchar(100)
userId varchar(20)

Okay, now we've got our table designed; the next thing is the partition function.

Syntax:

CREATE PARTITION FUNCTION partition_function_name ( input_parameter_type )
AS RANGE [ LEFT | RIGHT ]
FOR VALUES ( [ boundary_value [ ,...n ] ] )
[ ; ]

Example:

CREATE PARTITION FUNCTION [HourRangePF1](int) AS RANGE LEFT FOR VALUES (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23)

The above is the simplest of simple partition functions. Guess who the author is? It's meeeeeeeeeee (who else can write these kinds of stupid codes; this is too much showing off, isn't it?). Okay, since it is RANGE LEFT, it says put all data with a partition value less than or equal to 1 in partition one, values greater than 1 and up to 2 in partition two, and so on; anything above the last boundary value of 23 lands in the 24th partition.

Next we have to design the partition scheme.

Syntax:

CREATE PARTITION SCHEME partition_scheme_name
AS PARTITION partition_function_name
[ ALL ] TO ( { file_group_name | [ PRIMARY ] } [ ,...n ] )
[ ; ]

Example:

CREATE PARTITION SCHEME [HoursPS1] AS PARTITION [HourRangePF1] TO ([SECONDARY01], [SECONDARY02], [SECONDARY03], [SECONDARY04], [SECONDARY05], [SECONDARY06], [SECONDARY07], [SECONDARY08], [SECONDARY09], [SECONDARY10], [SECONDARY11], [SECONDARY12], [SECONDARY13], [SECONDARY14], [SECONDARY15], [SECONDARY16], [SECONDARY17], [SECONDARY18], [SECONDARY19], [SECONDARY20], [SECONDARY21], [SECONDARY22], [SECONDARY23], [SECONDARY24])

The above partition scheme provides the skeleton for the table. Basically, it defines which partition goes to which file group, based on the partition function.

CREATE TABLE [dbo].[srvLogs](
svrID int NOT NULL,
svrDate datetime NULL,
svrMsg varchar(500) NULL,
logOrigin varchar(100) NULL,
userId varchar(20) NULL,
PartitionNo int NOT NULL,
CONSTRAINT [PK_srvLogs_1] PRIMARY KEY NONCLUSTERED
(
svrID ASC,
PartitionNo ASC
) WITH (IGNORE_DUP_KEY = OFF) ON [HoursPS1]([PartitionNo])
) ON [HoursPS1]([PartitionNo])

The table is ready now. It is created across 24 file groups; it is up to you to place the file groups on as many disks as you want. There we go, we have created a partitioned table. The table is partitioned based on the column PartitionNo.
So if the partition number is equal to 1 the row goes to the first partition, if it is 2 it goes to partition 2, and so on. This segregation of the data is taken care of by the partition function. There are still more things to come on this partitioned table stuff; let me continue it in a couple of other posts.
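If you want to convince yourself that rows really land where you expect, here is a quick sanity check using the $PARTITION function against the partition function created above (the sample values are obviously made up):

-- Insert a sample row; hour 12 should map to partition 12 with this RANGE LEFT function.
INSERT INTO dbo.srvLogs (svrID, svrDate, svrMsg, logOrigin, userId, PartitionNo)
VALUES (1, GETDATE(), 'sample message', 'appserver01', 'ramkumar', 12);

-- Ask SQL Server which partition each row actually sits in.
SELECT svrID, PartitionNo,
       $PARTITION.HourRangePF1(PartitionNo) AS partition_number
FROM dbo.srvLogs;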

Wednesday, August 09, 2006

Let's get started

I had been itching to start a technical blog but could not, thanks to that well-known factor of mine: laziness. Thanks to AnjanaRam for making me initiate a blog. I had a couple of thoughts, like starting a SQL blog in Tamil and trying to Tamilize SQL Server, or starting a blog on the latest stuff in Microsoft technologies or updates on SQL Server. I don't know what I am going to write in this blog, but hopefully it will be related to SQL Server, as the name suggests. But I am pretty sure I am not going to spy on SQL Server or write a new version called SPYQLServer. Let us see. Let me start with the table partition stuff, since that is what I recently implemented.