
Ms Sql Server Technology Discussions



[quote name='mtkr' timestamp='1359758659' post='1303207754']
deals2 brother..... try the sample data and queries posted above...
with those, you get exactly the expected results... but you keep saying the WHERE clause isn't picking up the MAX!!!
[/quote]
I'll try it on Monday once I'm back at the office, brother.


[b] Difference Between NOLOCK and NOWAIT Hints[/b]

[i][b]NOWAIT returns an error if the original table has a (transaction) lock on it.[/b][/i]

[i][b]NOLOCK reads the data irrespective of any (transaction) lock on it.[/b][/i]

In either case you can get incorrect data or an unexpected result; there is no guarantee of getting the appropriate data.

Here is an example of how the two hints give different results in the same situation.

In this sample scenario we will create a table with a single row. We will open a transaction and delete that row, leaving the transaction open. We will then read the table from other connections, once with the NOWAIT hint and once with the NOLOCK hint. After the test we will roll back the original transaction that deleted the record, so there is no net change to the data; however, we will get a very different result in the case of NOLOCK and an error in the case of NOWAIT.

First, let us create a table:

[CODE]
USE tempdb
GO
CREATE TABLE First (ID INT, Col1 VARCHAR(10))
GO
INSERT INTO First (ID, Col1)
VALUES (1, 'First')
GO
[/CODE]

[img]http://www.pinaldave.com/bimg/nolocknowait1.jpg[/img]

Now let us open three different connections.

Run the following command in the First Connection:

[CODE]
BEGIN TRAN
DELETE FROM First
WHERE ID = 1
[/CODE]


[img]http://www.pinaldave.com/bimg/nolocknowait2.jpg[/img]

Run the following command in the Second Connection:

[CODE]
SELECT ID, Col1
FROM First WITH (NOWAIT)
WHERE ID = 1
[/CODE]

[img]http://www.pinaldave.com/bimg/nolocknowait4.jpg[/img]

Run the following command in the Third Connection:

[CODE]
SELECT ID, Col1
FROM First WITH (NOLOCK)
WHERE ID = 1
[/CODE]

[img]http://www.pinaldave.com/bimg/nolocknowait3.jpg[/img]

You can see that the results are as discussed earlier: there is no guarantee of a 100% correct result in either case. NOLOCK reads the uncommitted state of the data, while NOWAIT returns an error. If you want the committed, appropriate result, you should wait until the transaction lock on the original table is released and then read the data.
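To complete the test, go back to the First Connection and roll back the still-open transaction so the deleted row is restored, exactly as the scenario above describes. A minimal sketch:

[CODE]
-- Run this in the First Connection to undo the uncommitted delete
ROLLBACK TRAN
GO
-- The row (1, 'First') is visible again to ordinary readers
SELECT ID, Col1
FROM First
GO
[/CODE]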


[b] Query Analyzer Short Cut to display the text of Stored Procedure[/b]

This is a quick but interesting trick to display the text of a Stored Procedure in the result window. Open SQL Query Analyzer >> Tools >> Customize >> Custom Tab.

Type sp_helptext against Ctrl+3 (or a shortcut key of your choice).

Press CTRL + T to enable the text view in the result window.

Type any Stored Procedure name in your database, select the name, and press the shortcut key you assigned (Ctrl+3 in this example).

You will see the Stored Procedure text in the Result Window.
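Behind the shortcut, all that runs is sp_helptext against the selected name, so the same output can be produced by executing it directly (the procedure name below is only an example):

[CODE]
-- Display the definition of a stored procedure in the results pane
EXEC sp_helptext 'dbo.usp_MyProcedure'
[/CODE]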


[b] ReIndexing Database Tables and Update Statistics on Tables[/b]


[i][b]SQL Server 2005 uses the ALTER INDEX syntax to reindex a database. SQL Server 2005 still supports DBCC DBREINDEX, but it will be deprecated in future versions.[/b][/i]

When data modification operations (INSERT, UPDATE, or DELETE statements) occur, table fragmentation can result. The DBCC DBREINDEX statement can be used to rebuild all the indexes on all the tables in a database, and it is more efficient than dropping and recreating the indexes.

Executing the stored procedure sp_updatestats at the end of the reindexing process ensures that the statistics of the database are updated.

[b]Method 1: My Preference[/b]

[CODE]
USE MyDatabase
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
[/CODE]

[b]Method 2:[/b]

[CODE]
USE MyDatabase
GO
CREATE PROCEDURE spUtil_ReIndexDatabase_UpdateStats
AS
DECLARE @MyTable VARCHAR(255)
DECLARE myCursor CURSOR FOR
    SELECT table_name
    FROM information_schema.tables
    WHERE table_type = 'base table'
OPEN myCursor
FETCH NEXT FROM myCursor INTO @MyTable
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Reindexing Table: ' + @MyTable
    DBCC DBREINDEX(@MyTable, '', 80)
    FETCH NEXT FROM myCursor INTO @MyTable
END
CLOSE myCursor
DEALLOCATE myCursor
EXEC sp_updatestats
GO
[/CODE]
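Once the procedure has been created, running the whole reindex-and-update-stats pass is a single call (assuming the current database is still MyDatabase):

[CODE]
EXEC spUtil_ReIndexDatabase_UpdateStats
GO
[/CODE]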


[b] Find All The User Defined Functions (UDF) in a Database[/b]

The following is a very simple script that returns all the User Defined Functions for a particular database.

[CODE]
USE AdventureWorks;
GO
SELECT name AS function_name
,SCHEMA_NAME(schema_id) AS schema_name
,type_desc
FROM sys.objects
WHERE type_desc LIKE '%FUNCTION%';
GO
[/CODE]
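If you only want particular kinds of functions, the same query can filter on the type column instead; a small variation using the standard object type codes (FN = scalar, IF = inline table-valued, TF = multi-statement table-valued):

[CODE]
USE AdventureWorks;
GO
SELECT name AS function_name
,SCHEMA_NAME(schema_id) AS schema_name
,type_desc
FROM sys.objects
WHERE type IN ('FN', 'IF', 'TF'); -- scalar, inline TVF, multi-statement TVF
GO
[/CODE]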


[b] Join Operations - Nested Loops[/b]


Microsoft has provided three join operations for use in SQL Server. These operations are Nested Loops, Hash Match and Merge Join. Each of these provides different benefits and depending on the workload can be a better choice than the other two for a given query. The optimizer will choose the most efficient of these based on the conditions of the query and the underlying schema and indexes involved in the query. This article is the first of three in a series to explore these three Join Operations.
[b] Nested Loops[/b]

The Nested Loops operation is sometimes referred to as the Nested Iteration. In this join, the inner table is scanned (or sought) once for each row of the outer table, and each pair of rows is compared against the join criteria. A Nested Loops join may be used for any of the following logical operations: inner join, left outer join, left semi-join, left anti-semi-join, cross apply, outer apply and cross join. It supports all join predicates.
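Conceptually (this is only an illustrative sketch, not the engine's actual implementation), the operator behaves like two nested loops over the join inputs:

[CODE]
-- Pseudocode for the Nested Loops join:
-- for each row O in the outer input
--     for each row I in the inner input (often an index seek on the join key)
--         if the join predicate holds for (O, I)
--             output the combined row
[/CODE]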

In a Graphical Execution Plan, the Nested Loops Operator looks like the following image.

[img]http://www.sqlservercentral.com/Images/7726.jpg[/img]

When using the "set statistics profile" option, you will notice that the Nested Loops will appear in your results as shown in the following image.

[img]http://www.sqlservercentral.com/Images/7727.jpg[/img]
[b] In Action[/b]


How can we see the Nested Loops join in action? Let's do a little setup to demonstrate the Nested Loops join. First let's create a couple of tables and then populate those tables with the following scripts.
[CODE]SELECT TOP 10000
OrderID = IDENTITY(INT,1,1),
OrderAmt = CAST(ABS(CHECKSUM(NEWID()))%10000 /100.0 AS MONEY),
OrderDate = CAST(RAND(CHECKSUM(NEWID()))*3653.0+36524.0 AS DATETIME)
INTO dbo.Orders
FROM Master.dbo.SysColumns t1,
Master.dbo.SysColumns t2
go

CREATE TABLE [dbo].[OrderDetail](
[OrderID] [int] NOT NULL,
[OrderDetailID] [int] NOT NULL,
[PartAmt] [money] NULL,
[PartID] [int] NULL)

;
Insert Into OrderDetail (OrderID,OrderDetailID,PartAmt,PartID)
Select OrderID,
OrderDetailID = 1,
PartAmt = OrderAmt / 2,
PartID = ABS(CHECKSUM(NEWID()))%1000+1
FROM Orders[/CODE]
As you can see, I have created two tables for this simple example. Neither table has an Index or a Primary Key at this point. Let's run a query against these tables and see the results.
[CODE]Select O.OrderId, OD.OrderDetailID, O.OrderAmt, OD.PartAmt, OD.PartID, O.OrderDate
From Orders O
Inner Join OrderDetail OD
On O.OrderID = OD.OrderID[/CODE]
[img]http://www.sqlservercentral.com/Images/7728.jpg[/img]

Here, we see that the query results in a Hash Match at this point. I could force a Nested Loops if I were to use a query option such as shown in the following query.
[CODE]Select O.OrderId, OD.OrderDetailID, O.OrderAmt, OD.PartAmt, OD.PartID, O.OrderDate
From Orders O
Inner Join OrderDetail OD
On O.OrderID = OD.OrderID
-- Without this option the optimizer would choose a Hash Match for this example
Option (loop join) -- force a loop join[/CODE]
This will provide us with a Nested Loops Join by forcing the optimizer to use one. However, this is not recommended unless you know for certain that the Nested Loops Join is better in this case. The optimizer takes into account the number of records as well as the indexes involved in the query.

Let's take it a step further now. I will put some Primary Keys (with Clustered Indexes on the Primary Keys) on the tables and then I will run the same query again and check the results again.
[CODE]ALTER TABLE dbo.Orders
ADD PRIMARY KEY CLUSTERED (OrderID)
ALTER TABLE dbo.OrderDetail
ADD PRIMARY KEY CLUSTERED (OrderID,OrderDetailID)[/CODE]
[img]http://www.sqlservercentral.com/Images/7729.jpg[/img]

As can be seen, we now have a Merge Join. This Merge Join occurs due to the (relatively) large number of records in both tables. The optimizer has chosen this operation as the fastest method to achieve the results. Notice that the execution plan now performs clustered index (CI) scans on both tables, whereas previously the optimizer performed table scans on both tables. ([i]Note: A CI scan is essentially a table scan; the use of a Clustered Index scan here merely denotes the subtle difference in the graphical execution plan.[/i]) We also see that the relative cost has shifted somewhat from the Join Operator to the Index Scans.

I will now take this one step further. I will now change the query ever so slightly and you will see that we will get a Nested Loops Operator in place of the Merge Join.
[CODE]Select O.OrderId, OD.OrderDetailID, O.OrderAmt, OD.PartAmt, OD.PartID, O.OrderDate
From Orders O
Inner Join OrderDetail OD
On O.OrderID = OD.OrderID
Where O.OrderID < 10[/CODE]
[img]http://www.sqlservercentral.com/Images/7730.jpg[/img]

The change employed is to reduce the result set from one of the tables. In this case, I chose to return all records from both tables where Orders.OrderID was less than 10. With indexes placed on the join columns and a condition on Orders.OrderID, we reduce the number of operations as well as the IO required to perform this query. This correlates with the following statement from MSDN:

[i]If one join input is small (fewer than 10 rows) and the other join input is fairly large and indexed on its join columns, an index nested loops join is the fastest join operation because they require the least I/O and the fewest comparisons.[/i] [url="http://msdn.microsoft.com/en-us/library/ms191426.aspx"][i]http://msdn.microsoft.com/en-us/library/ms191426.aspx[/i][/url]

Let's evaluate that from another perspective and look at the IO statistics and execution time for the Merge Join and Hash Match in comparison to the Nested Loops, as seen with the progression of the queries to this point. ([i]This may not be an entirely fair comparison; I intend it more as a demonstration of how this example query became more optimized.[/i]) As a reminder, the figures apply specifically to this particular example.

Table 1 (Merge Join / Hash Match / Nested Loops):
OrderDetail Physical Reads: 0 / 0 / 0
OrderDetail Logical Reads: 38 / 37 / 18
Orders Physical Reads: 0 / 0 / 0
Orders Logical Reads: 38 / 37 / 2
Elapsed Time: 604 ms / 775 ms / 261 ms
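If you want to reproduce these figures, the reads and timings come from the session statistics, which can be switched on before running each query; a minimal sketch:

[CODE]
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the query being measured here, then read the figures from the Messages tab
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
[/CODE]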

From this we see that the logical reads on both tables and the elapsed time decrease substantially. In this case, we have fewer records and are using the indexes to query for the result set. Referring back to the execution plan, one sees that we are now using Clustered Index seeks. In the example used to this point, there is only a one-to-one relationship between the tables, though it could be one-to-many. I need to add a few more records to create a result set indicative of a one-to-many relationship, which is done with the following script.
[CODE]Insert into OrderDetail (OrderID,OrderDetailID,PartAmt,PartID)
Select OrderID,
OrderDetailID = 2,
PartAmt = OrderAmt / 2,
PartID = ABS(CHECKSUM(NEWID()))%1000+1
FROM Orders[/CODE]
Now I will re-run those stat comparisons. For brevity I will just compare the Merge Join and the Nested Loops Join.

Table 2 (Merge Join / Nested Loops):
OrderDetail Physical Reads: 0 / 0
OrderDetail Logical Reads: 74 / 18
Orders Physical Reads: 0 / 0
Orders Logical Reads: 38 / 2
Elapsed Time: 851 ms / 1 ms

This further demonstrates how the Nested Loops join is a better fit for this particular query. Due to the indexes and the WHERE condition, the query optimizer can use a Nested Loops join and performance is better. But what if I use a query hint to force the Merge Join query to use a Nested Loops join; how will that affect the outcome?
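The forced version measured below is presumably just the earlier unfiltered join with the same loop join hint applied (a sketch, since the exact statement is not shown above):

[CODE]
Select O.OrderId, OD.OrderDetailID, O.OrderAmt, OD.PartAmt, OD.PartID, O.OrderDate
From Orders O
Inner Join OrderDetail OD
On O.OrderID = OD.OrderID
Option (loop join) -- force the optimizer to use Nested Loops
[/CODE]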
Table 3 (Merge Join forced to Loop Join via query hint / Nested Loops):
OrderDetail Physical Reads: 0 / 0
OrderDetail Logical Reads: 21374 / 18
Orders Physical Reads: 0 / 0
Orders Logical Reads: 38 / 2
Elapsed Time: 473 ms / 1 ms

By trying to force the optimizer to use a Nested Loops join where the query didn't really warrant it, we did not improve the query, and it could be argued that we caused more work to be performed.

[b]Conclusion[/b]
The Nested Loops join is a physical operator that the optimizer employs based on query conditions. It can be seen in a graphical execution plan and can be employed for logical joins such as inner join, left outer join, left semi join, and left anti semi join. The Nested Loops join is also more likely to be the optimizer's choice when one table has few records (e.g. <= 10) and there are good indexes on the join columns in the query.


[size=6]Silent Truncation of SQL Server Data Inserts[/size]

[b] Problem[/b]

There are certain circumstances where SQL Server will silently truncate data, without providing an error or warning, before it is inserted into a table. In this tip we cover some examples of when this occurs.
[b] Solution[/b]

Normally, SQL Server will present an error on any attempt to insert more data into a field than it can hold:

[CODE] Msg 8152, Level 16, State 14, Line 2
String or binary data would be truncated.
The statement has been terminated.[/CODE]

SQL Server will not permit a silent truncation of data just because the column is too small to accept the data. But there are other ways that SQL Server can truncate data that is about to be inserted into a table that will not generate any form of error or warning.
[b] ANSI_WARNINGS turned off[/b]

By default, [url="http://msdn.microsoft.com/en-us/library/ms190368.aspx"]ANSI_WARNINGS[/url] is turned on, and certain activities such as creating indexes on computed columns or indexed views require that it be turned on. But if it is turned off, SQL Server will truncate the data as needed to make it fit into the column. The ANSI_WARNINGS setting for a session can be controlled by:

[CODE] SET ANSI_WARNINGS {ON|OFF}[/CODE]
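As a quick illustration (a sketch using a temporary table, not taken from the original tip), turning the setting off lets an oversized value go in silently:

[CODE] SET ANSI_WARNINGS OFF
CREATE TABLE #t (col1 varchar(5))
INSERT INTO #t VALUES ('This is a long string') -- no error; the value is cut to 5 characters
SELECT col1 FROM #t -- returns 'This '
DROP TABLE #t
SET ANSI_WARNINGS ON[/CODE]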

[b] Passing through a variable[/b]

Unlike with an insert into a table, SQL Server will quietly cut off data that is being assigned to a variable, regardless of the status of ANSI_WARNINGS. For instance:

[CODE] declare @smallString varchar(5)
declare @testint int
set @smallString = 'This is a long string'
set @testint = 123.456
print @smallString
print @testint[/CODE]

Results in:

[CODE] This
123[/CODE]

This is because SQL Server is trying to do an [url="http://msdn.microsoft.com/en-us/library/aa224021%28v=sql.80%29.aspx"]implicit conversion[/url] of the data type to the type of the variable.

This can occasionally show itself in subtle ways, since passing a value into a stored procedure or function assigns it to the parameter variables and will quietly do a conversion. One method that can help guard against this situation is to give any parameter that will be directly inserted into a table a larger datatype than the target column, so that SQL Server will raise the error, or perhaps to check the length of the parameter and have custom code to handle it when it is too long.

For instance, if a stored procedure will use a parameter to insert data into a table with a column that is varchar(10), make the parameter varchar(15). Then, if the data that is passed in is too long for the column, the statement will roll back and raise a truncation error instead of silently truncating and inserting. Of course, that runs the risk of being misleading to anyone who looks at the stored procedure's header information without understanding what was done.
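For example (a sketch with hypothetical object names, just to illustrate the suggestion above), a procedure writing to a varchar(10) column could take a varchar(15) parameter so that an 11-15 character value still raises the truncation error at insert time instead of being silently cut:

[CODE] CREATE TABLE dbo.Customers (CustomerCode varchar(10))
GO
CREATE PROCEDURE dbo.AddCustomer
@CustomerCode varchar(15) -- deliberately wider than the target column
AS
BEGIN
-- With ANSI_WARNINGS on, this raises Msg 8152 if @CustomerCode is longer than 10 characters
INSERT INTO dbo.Customers (CustomerCode)
VALUES (@CustomerCode)
END[/CODE]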
[b] Unexpected Return Types[/b]

Many of SQL Server's built-in functions determine their return type based on the parameters passed in. Some of these can determine their return type in subtle ways that can result in silent truncation. For instance, as Aaron Bertrand highlighted, [url="http://www.mssqltips.com/sqlservertip/2689/deciding-between-coalesce-and-isnull-in-sql-server/"]isnull[/url] uses the datatype of the first parameter in determining its return type. So, if the second parameter is longer, it can be silently truncated to match.
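A quick illustration of the isnull behavior (a sketch, not from the original tip):

[CODE] declare @first varchar(5)
declare @second varchar(10)
set @second = '1234567890'
select isnull(@first, @second) -- returns '12345'; the return type is varchar(5), taken from the first parameter[/CODE]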
Similarly, [url="http://msdn.microsoft.com/en-us/library/ms177561.aspx"]string concatenation[/url] can result in a silent truncation for long strings. Normally, the return type from string concatenation is long enough to hold the result even if none of the variables used would be large enough on its own; so, if two char(5) variables are concatenated, the result is ten characters long. However, unless at least one of the strings being concatenated is of a large value type (such as varchar(max) or nvarchar(max)), the result will be truncated rather than promoted to a large value type. For instance:

[CODE] declare @long1 varchar(max), @long2 varchar(max)
declare @short1 varchar(8000), @short2 varchar(8000)
set @short1 = replicate('0', 8000)
set @short2 = replicate('1', 8000)
set @long1 = @short1 + @short2 --neither is large value type, but @long1 is.
print len(@long1)
set @long2 = cast(@short1 as varchar(max)) + @short2 --one is large value type
print len(@long2)
/*Results:
8000
16000
*/[/CODE]

[b] ASCII Null values[/b]

ASCII Null character values (char(0)) are used by many systems to convey the end of a text string. The main SQL engine simply handles them literally and will permit a value containing a literal null character to be stored. However, the results in SSMS from a SELECT, and some client applications that SQL Server might hand the data off to later, may take the presence of the null character to indicate the end of the string, even though the value was successfully inserted and is technically in the table. For example:

[CODE] if object_id(N'testTbl', N'U') is not null
drop table testTbl
create table testTbl (col1 varchar(25))
insert into testTbl
values ('Start ' + char(0) + 'end')
select
col1, --Stops at the null
len(col1) as sLen, --Full length
replace(col1, char(0), '') as [NoNull] --Now the whole string
from testTbl
--But this prints out past the null
--Although displayed past the null, the results still include it
--Which can be carried over if copied and pasted.
print 'Start ' + char(0) + 'end'
/*Results:
(1 row(s) affected)
(1 row(s) affected)
Start end
*/[/CODE]


[img]http://www.mssqltips.com/tipimages2/2857_NullResults.jpg[/img]


[b] [url="http://www.sqlservercentral.com/links/1427054/291839"]SQL Intersection in Las Vegas This Spring[/url][/b]

Interested candidates, please register.


[b] SSIS 6 Million Rows in 60 Seconds using standard laptop[/b]


SSIS is a very powerful tool and can be very fast; however, I frequently come across situations where performance could be improved by making several simple changes or minor hardware upgrades.

There are many factors that need to be taken into consideration, and it would be nice to have a simple [i]health test[/i] that instantly shows whether there is a problem. I generally work on several projects every year, so I thought about building an SSIS Performance Framework that helps me identify problems quickly when I take on a new contract (v0.001 below), but also helps me to justify upgrades.

I thought the best way to do that is to generate data and load it using the same package on different environments, and additionally to compare each environment to a typical laptop that uses one physical disk; in other words, if I can tell the client [b]your server is slower than my laptop[/b], then I have quite a good chance of persuading the client to look into that or upgrade the server.

Below is a video that shows a simple load of 6 million rows (almost 1 GB) in 60 seconds on a laptop, which gives us 100k rows per second.

http://www.youtube.com/watch?v=xCQTlpvNpi8&feature=player_embedded

P.S. I got 45 seconds after reading the data from a pen drive that I bought for £10 (USB 2.0), so that was 133k rows per second.

If you would like to generate the data, you can use the DBGEN tool; below is a video with instructions.

http://www.youtube.com/watch?v=xCQTlpvNpi8&feature=player_embedded



Can we create a computed column that depends on an already created computed column?
Consider the following table creation:
[CODE]CREATE TABLE COMPUTED_COL
(
A INT,
B INT,
c int,
d AS (A+B)
,e AS d+C
)
;[/CODE]
Will the table be created?



[b]Answer: [/b]No

[b]Explanation: [/b]This is not possible. You will receive the error:
Msg 1759, Level 16, State 0, Line 1
Computed column 'd' in table 'COMPUTED_COL' is not allowed to be used in another computed-column definition.

Ref: Computed columns - [url="http://www.sqlservercentral.com/links/1427054/291707"]http://msdn.microsoft.com/en-us/library/ms191250(v=sql.105).aspx[/url]
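If you need a column like e, the usual workaround is to repeat the underlying expression instead of referencing the other computed column; a sketch:

[CODE]CREATE TABLE COMPUTED_COL
(
A INT,
B INT,
C INT,
d AS (A + B),
e AS (A + B + C) -- repeat the expression instead of referring to d
);[/CODE]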


[quote name='SonyVaio' timestamp='1359802690' post='1303210194']
I am a student, not working yet, so please help me.
I have about four existing tables. Now I've been asked to create one more table,
and to put certain column names in it, which I did.

1) The table I created has 5 columns, and I'm asked to put a primary key on 2 of them. Normally you put a PK on a single column, right? So how do I put a PK on the 2nd column as well?
2) For the table I created, I was told "the table has a relationship with" one of the already existing tables (they gave me its name). How do I set up this relationship?
3) How do I produce the SQL statement for this whole process?
[/quote]


If you were already working, this would be a very basic question, buddy (BUT YOU ARE NOT)....

1) A PK can be created on up to 16 columns combined.... it is called a "composite PK" or "compound key".
A compound key is a key with more than one attribute/column...

To my knowledge, PKs and FKs can be created in 3 ways....

[code]

-------------------- table level....

create table table1
(
id int,
name varchar(20),
constraint pk_tbl1_id primary key (id)
)


---------------- column level....

create table table1
(
id int constraint pk_tbl1_id primary key,
name varchar(20)
)

-- In the creation above you can also skip the constraint name and just write
-- "id int primary key"; the key then gets created with a system-generated name.


------------------ by alter
alter table table1
add constraint pk_tbl1_id primary key (id)

[/code]
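And for the 2-column case in the question, a composite PK is declared at table level by listing both columns (the table and column names below are just placeholders):

[code]
-- composite (compound) primary key across two columns
create table table3
(
col1 int,
col2 int,
name varchar(20),
constraint pk_tbl3_col1_col2 primary key (col1, col2)
)
[/code]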



2) Table relationships are represented with foreign keys....


[code]

------------------ table level.....

create table table2
(
id2 int,
id1 int,
fname varchar(20),
constraint pk_tbl2_id2 primary key (id2),
constraint fk_tbl2_id1 foreign key (id1) references table1(id)
)


--------------------- column level.....

create table table2
(
id2 int constraint pk_tbl2_id2 primary key,
id1 int constraint fk_tbl2_id1 foreign key references table1(id),
fname varchar(20)
)


------------------ by alter
alter table table2
add constraint fk_tbl2_id1 foreign key (id1) references table1(id)

[/code]




3) I didn't understand the question, buddy....


[quote name='Kaarthikeya' timestamp='1359811769' post='1303210302']
[b] SSIS 6 Million Rows in 60 Seconds using standard laptop[/b]
...
[/quote]


Yes, good one..... I used it once during my training time.... my trainer told me about it....


[quote name='mtkr' timestamp='1359815019' post='1303210365']
If you were already working, this would be a very basic question, buddy (BUT YOU ARE NOT)....
...
[/quote]

Can this process be done manually, without writing a query?
If it can, what are the steps?


[quote name='SonyVaio' timestamp='1359829184' post='1303211563']
Can this process be done manually, without writing a query?
If it can, what are the steps?
[/quote]
Yes, you can.... right-click directly on the tables and follow the process from there.

