Monday, February 28, 2011

How to call/invoke an .Rdl file in a new window

Create a PassingURLParameterName parameter and assign the report path as its value. Then select the report cell, go to its properties, select the Action property, and enter the following expression there:

="javascript:void(window.open('"+Globals!ReportServerUrl+Parameters!PassingURLParameterName.Value+"&Parval="+Right(Fields!parametername.UniqueName,Len(Fields!parametername.UniqueName)-23)+"&rc:Parameters=false&rc:Toolbar=false','welcome','width=310,height=250,location=yes,status=no,toolbar=no,scrollbars=yes,menubar=no,left=600,top=200'))"

Sunday, February 27, 2011

Script to Drop the Constraints

SELECT 'ALTER TABLE [' + OBJECT_NAME(f.parent_object_id) +
'] DROP CONSTRAINT [' + f.name + ']' + CHAR(10) +
'GO ' + CHAR(10)
FROM sys.foreign_keys AS f
INNER JOIN sys.foreign_key_columns AS fc
ON f.object_id = fc.constraint_object_id

Thursday, February 24, 2011

SQL Server Data Types

Character strings:
char(n) Fixed-length character string. Maximum 8,000 characters
varchar(n) Variable-length character string. Maximum 8,000 characters
varchar(max) Variable-length character string. Maximum 2,147,483,647 characters (2GB)
text Variable-length character string. Maximum 2GB of text data

Unicode strings:
nchar(n) Fixed-length Unicode data. Maximum 4,000 characters
nvarchar(n) Variable-length Unicode data. Maximum 4,000 characters
nvarchar(max) Variable-length Unicode data. Maximum 1,073,741,823 characters (2GB)
ntext Variable-length Unicode data. Maximum 2GB of text data
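A quick way to see why the n-types halve the character limits: SQL Server stores Unicode data at two bytes per character (UTF-16 code units), so the same 8,000-byte budget that gives varchar 8,000 characters gives nvarchar only 4,000. A small Python sketch of the arithmetic (an illustration, not SQL Server itself):

```python
def utf16_bytes(text: str) -> int:
    """Bytes needed to store this string as Unicode data
    (UTF-16, little-endian, no BOM) -- how the n-types store it."""
    return len(text.encode("utf-16-le"))

print(utf16_bytes("abcd"))      # 8 bytes for 4 characters
print(utf16_bytes("données"))   # accented characters still take 2 bytes each
```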

Binary types:
bit Allows 0, 1, or NULL
binary(n) Fixed-length binary data. Maximum 8,000 bytes
varbinary(n) Variable-length binary data. Maximum 8,000 bytes
varbinary(max) Variable-length binary data. Maximum 2GB
image Variable-length binary data. Maximum 2GB

Number types:
tinyint Allows whole numbers from 0 to 255
smallint Allows whole numbers between -32,768 and 32,767
int Allows whole numbers between -2,147,483,648 and 2,147,483,647
bigint Allows whole numbers between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807
decimal(p,s) Fixed precision and scale numbers.
Allows numbers from -10^38 +1 to 10^38 –1.
The p parameter indicates the maximum total number of digits that can be stored (both to the left and to the right of the decimal point). p must be a value from 1 to 38. Default is 18.
The s parameter indicates the maximum number of digits stored to the right of the decimal point. s must be a value from 0 to p. Default value is 0
numeric(p,s) Fixed precision and scale numbers.
Allows numbers from -10^38 +1 to 10^38 –1.
The p parameter indicates the maximum total number of digits that can be stored (both to the left and to the right of the decimal point). p must be a value from 1 to 38. Default is 18.
The s parameter indicates the maximum number of digits stored to the right of the decimal point. s must be a value from 0 to p. Default value is 0
smallmoney Monetary data from -214,748.3648 to 214,748.3647
money Monetary data from -922,337,203,685,477.5808 to 922,337,203,685,477.5807
float(n) Floating precision number data from -1.79E + 308 to 1.79E + 308.
The n parameter indicates whether the field should hold 4 or 8 bytes. float(24) holds a 4-byte field and float(53) holds an 8-byte field. Default value of n is 53.
real Floating precision number data from -3.40E + 38 to 3.40E + 38
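The interplay of p (total digits) and s (digits after the decimal point) can be mimicked with Python's decimal module. This is a hedged sketch, not SQL Server itself, but the rounding and overflow behavior matches a decimal(p,s) column:

```python
from decimal import Decimal, ROUND_HALF_UP

def as_decimal_p_s(value: str, p: int = 18, s: int = 0) -> Decimal:
    """Round a value the way a decimal(p,s) column stores it; raise when
    it needs more than p total digits (SQL Server raises an overflow)."""
    quantized = Decimal(value).quantize(Decimal(1).scaleb(-s),
                                        rounding=ROUND_HALF_UP)
    if len(quantized.as_tuple().digits) > p:
        raise OverflowError(f"value does not fit in decimal({p},{s})")
    return quantized

print(as_decimal_p_s("123.456", p=5, s=2))   # 123.46
```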

Date types:
datetime From January 1, 1753 to December 31, 9999 with an accuracy of 3.33 milliseconds
datetime2 From January 1, 0001 to December 31, 9999 with an accuracy of 100 nanoseconds
smalldatetime From January 1, 1900 to June 6, 2079 with an accuracy of 1 minute
date Store a date only. From January 1, 0001 to December 31, 9999
time Store a time only to an accuracy of 100 nanoseconds
datetimeoffset The same as datetime2 with the addition of a time zone offset
timestamp Stores a unique number that gets updated every time a row gets created or modified. The timestamp value is based upon an internal clock and does not correspond to real time. Each table may have only one timestamp variable

Other data types:
sql_variant Stores up to 8,000 bytes of data of various data types, except text, ntext, and timestamp
uniqueidentifier Stores a globally unique identifier (GUID)
xml Stores XML formatted data. Maximum 2GB
cursor Stores a reference to a cursor used for database operations
table Stores a result-set for later processing

Wednesday, February 23, 2011

3 different ways to insert records in SQL Server 2008

insert into EMP (EMPID, FirstName,LastName)
values (11,'amit','kumar')
insert into EMP (EMPID, FirstName,LastName)
values (22, 'ajay','kumar')


insert into EMP (EMPID, FirstName,LastName)
Select 11,'amit','kumar'
union all
Select 22, 'ajay','kumar'


insert into EMP (EMPID, FirstName,LastName)
values (11,'amit','kumar'),
(22, 'ajay','kumar')
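The same three patterns work against most engines. Here is a hedged sketch against Python's built-in SQLite (table and rows follow the example above; SQLite also accepts the 2008-style row constructor):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPID INT, FirstName TEXT, LastName TEXT)")

# 1) One INSERT statement per row
conn.execute("INSERT INTO EMP (EMPID, FirstName, LastName) "
             "VALUES (11, 'amit', 'kumar')")

# 2) INSERT ... SELECT, chaining rows with UNION ALL
conn.execute("INSERT INTO EMP (EMPID, FirstName, LastName) "
             "SELECT 22, 'ajay', 'kumar' UNION ALL SELECT 33, 'anil', 'kumar'")

# 3) Row constructors: several value lists in one VALUES clause
conn.execute("INSERT INTO EMP (EMPID, FirstName, LastName) "
             "VALUES (44, 'asha', 'kumari'), (55, 'arun', 'kumar')")

print(conn.execute("SELECT COUNT(*) FROM EMP").fetchone()[0])  # 5
```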

Monday, February 21, 2011

To know about your database details

select * from sysfiles


select * from sys.databases where name ='DatabaseName'

Wednesday, February 16, 2011

Converting RTF to TXT in SSRS - Removing Funny Letters

For example: we have a blog field in a table, and from there we get data that looks like this...

{\rtf1\ansi\ansicpg1252\deff0\deflang1033{\fonttbl{\f0\fnil\fcharset0 Microsoft Sans Serif;}{\f1\fnil Microsoft Sans Serif;}} \viewkind4\uc1\pard\f0\fs17

First Way:

public function TrimJDETags(strText as string) as String
strText = mid(strText,instr(strText,"\fs20")+6)
strText = replace(strText, "\fs22", "")
'Note: as published, the next line strips every letter "d"; it was most likely meant to target an RTF control word
strText = replace(strText, "d", "")
strText = replace(strText, "\ltrpar", "")
strText = replace(strText, "}", "")
strText = replace(strText, "\pard", "")
strText = replace(strText, "\slmult1", "")
strText = replace(strText, "\sl276", "")
strText= replace(strText," ","")
strText = replace(strText,"^","")
strText = replace(strText, "\par }","")
strText = replace(strText, "\par", vbcr)
strText = replace(strText, "\tab", vbtab)
strText = replace(strText, "\b0" , "")
strText = replace(strText, "\ul" , "")
strText = replace(strText, "\b", vblf)
strText = replace(strText, "\cf0", "")
strText = replace(strText, "\cf1", "")
strText = replace(strText, "\i0", "")
strText = replace(strText, "\i", "")
strText = replace(strText, "\f0", "")
strText = replace(strText, "\f1", "")
strText = replace(strText, "\f20", "")
strText = replace(strText, "\f24", "")
strText = replace(strText, "\f2", "")
strText = replace(strText, "\f3", "")
strText = replace(strText, "\fs20", "")
strText = replace(strText, "\fs24", "")
strText = replace(strText, "\super", "")
strText = replace(strText, "\nosuper", "")
strText = replace(strText, "\cf2none" , "")
strText = replace(strText, "\cf2" , "")
strText = replace(strText, "\cf3" , "")
strText = replace(strText, "}" , "")
TrimJDETags = strText
end function
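As an alternative to the replace chain above, the same stripping can be sketched generically with regular expressions that drop RTF control words and braces. A rough sketch only; real RTF needs a proper parser for escapes and nested groups:

```python
import re

def strip_rtf_tags(rtf: str) -> str:
    """Crude RTF-to-text: map \\par and \\tab to whitespace, drop every
    other control word (backslash + letters + optional numeric parameter)
    and all braces."""
    text = re.sub(r"\\pard?\b ?", "\n", rtf)          # \par / \pard -> newline
    text = re.sub(r"\\tab\b ?", "\t", text)           # \tab -> tab
    text = re.sub(r"\\[a-zA-Z]+-?\d* ?", "", text)    # any other control word
    text = text.replace("{", "").replace("}", "")
    return text.strip()

sample = r"{\rtf1\ansi\deff0\f0\fs17 Hello\par World}"
print(strip_rtf_tags(sample))   # "Hello" and "World" on two lines
```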

Second Method:
For this function you need to add a reference to the System.Windows.Forms assembly, and also to modify the rssrvpolicy.config file in the Report Server folder (change the Report_Expressions_Default_Permissions code group from Execution to FullTrust).

Code Modified Section:

<CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="FullTrust"
    Name="Report_Expressions_Default_Permissions"
    Description="This code group grants default permissions for code in report expressions and Code element.">
  <IMembershipCondition class="StrongNameMembershipCondition" version="1"
      PublicKeyBlob="0024000004800000940000000602000000240000525341310004000001000100512C8E872E28569E733BCB123794DAB55111A0570B3B3D4DE3794153DEA5EFB7C3FEA9F2D8236CFF320C4FD0EAD5F677880BF6C181F296C751C5F6E65B04D3834C02F792FEE0FE452915D44AFE74A0C27E0D8E4B8D04EC52A8E281E01FF47E7D694E6C7275A09AFCBFD8CC82705A06B20FD6EF61EBBA6873E29C8C0F2CAEDDA2" />
</CodeGroup>
<CodeGroup class="FirstMatchCodeGroup" version="1" PermissionSetName="FullTrust"
    Description="This code group grants MyComputer code Execution permission.">
  <IMembershipCondition class="ZoneMembershipCondition" version="1" Zone="MyComputer" />
</CodeGroup>

Use this function in the report:

Public Shared Function ConvertRtfToText(ByVal input As String) As String
    Dim returnValue As String = String.Empty
    Try
        Using converter As New System.Windows.Forms.RichTextBox()
            converter.Rtf = input.Trim()
            returnValue = converter.Text
        End Using
        Return returnValue
    Catch e As Exception
        Return e.Message
    End Try
End Function

Monday, February 14, 2011

SSIS Configurations

The real work of creating SSIS packages begins when we are about to deploy them to a given environment, the point where every hard-coded value needs attention. To make this task easy, SSIS provides a feature called Configurations. This post walks through the various kinds of configurations available in SSIS and explains when to use each.

Everything goes fine while we design, develop and deploy packages on our local environment (server). The real problems arise when we try to deploy the same packages to other (remote) environments. Since every question has its own answer, we have a solution (configuration files) for this kind of situation. During development most of us hard-code connection strings, file paths, package paths and so on, which creates problems when the packages are moved to another environment. By using a configuration file, we only need to change values in one place instead of in multiple places inside a solution. The configuration file in turn adds dynamic execution behavior to our SSIS packages.
A configuration is a name-value pair. The name is a property within a package that needs to be changed, and the value is the value to be assigned to that property.
An Integration Services package consists of many objects for ex: Containers (this includes all tasks, as well as other containers such as the ForEach Loop Enumerator), Variables, Connection Managers, Data Flow Components etc. However you cannot apply a configuration to an object; you can only apply a configuration to a property of an object. For example, if you wanted to change the value returned when referencing a package variable, you would apply the configuration to the Value property of the variable, not to the variable object itself.
For Ex: \Package.Variables[User::MyVar].Properties[Value]

Configuration Types
There are five basic types of Integration Services configurations.
1. XML Configuration File
This is the most commonly used configuration type. An XML file, which by default has the .dtsConfig suffix, is created which contains one or more configurations.
Advantage: They can be easily deployed onto multiple environments simply by copying the file.

Disadvantage: The path to the file must be hard-coded within the package. However this requirement can be alleviated by using indirect configurations. “Indirect configurations” is a term used to describe the practice of using an operating system environment variable to redirect the Integration Services execution engine to the location of the XML configuration file.
A configuration stored in an XML configuration file stores both the property path and the value outside the package.

2. Environment Variable Configuration
An environment variable configuration takes the value for a property from whatever is stored in a named environment variable. An environment variable configuration stores the property path inside the package and the value outside the package.
Note that an environment variable configuration by itself is not an example of an indirect configuration.

3. Parent Package Configuration
Parent package configurations are used to fetch a value from a variable in a calling package. Just like environment variable configurations, a parent package configuration stores the property path inside the package and the value outside the package.

4. Registry Configuration
The value to be applied to a package property is stored in a registry entry. Just like environment variable configurations and parent package configurations, a registry configuration stores the property path inside the package and the value outside the package.

5. SQL Server Configuration
SQL Server configurations are stored in a SQL Server table. The table can have any name you like, and can be in any database on any server that you like. If you are using a SQL Server configuration, then you can and should make it an indirect configuration. As explained earlier in this article, when using indirect configurations, an environment variable stores the location of the configurations themselves.

A configuration stored in SQL Server stores both the property path and the value outside the package.

Create a user variable. When you add configurations to a package, you are basically exposing package properties and allowing them to be updated with a new value that comes from a file, table, environment variable, registry key or parent package variable. Package configurations are disabled by default, and have to be enabled on each package in which you want to use them. You can enable package configurations by choosing the Package Configurations option on the SSIS menu.
• Select Enable Package Configurations Option.
• Click on Add button.
You can choose the configuration type, which in this case is an XML configuration file. Then you have two options for providing the location of the configuration file.

• The first option is known as the direct method, where the path to the file is stored inside the package definition. This option works fine if you don't have to deploy the package to different machines, or if the path provided is valid on every machine the package will be deployed to.
• The second method is called indirect, where you can pick the name of a Windows environment variable that would hold the path to the configuration file. This method is ideal when you have to deploy packages to different servers, and when the location of the XML file is subject to change, since you would only need to change the value of the environment variable where the package is deployed. Notice that the environment variable must exist prior to creating the package configuration, and in some instances you may have to close and re-open the Business Intelligence Development Studio (BIDS) session in order to get the name of the environment variable in the dropdown list.
When you press the Finish button, the configuration file is created and you should be able to see a new entry in the package configuration organizer.

An XML configuration file has two parts. The header contains metadata about the file itself, like creator, the name and ID of the package that was used when creating the file, and the creation date and time. The configuration section is where the path to the properties being updated and the configuration values to be used are stored.
In this example, we decide to provide "Run-time value from config file" as the new value for the variable being configured. To do that, we open the file in a text editor and change the ConfiguredValue entry from the design-time value ("During Development") to the run-time value ("Run-time value from config file").
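The pairing of property paths and values in such a file can be sketched and parsed in Python. A hedged sketch: the element names follow the usual DTSConfiguration layout, and the path and value come from the example above.

```python
import xml.etree.ElementTree as ET

# Minimal .dtsConfig-style document (header metadata omitted).
DTSCONFIG = """\
<DTSConfiguration>
  <Configuration ConfiguredType="Property"
                 Path="\\Package.Variables[User::MyVar].Properties[Value]"
                 ValueType="String">
    <ConfiguredValue>Run-time value from config file</ConfiguredValue>
  </Configuration>
</DTSConfiguration>
"""

def read_configurations(xml_text: str) -> dict:
    """Map each property path to the value the configuration applies."""
    root = ET.fromstring(xml_text)
    return {cfg.get("Path"): cfg.findtext("ConfiguredValue")
            for cfg in root.iter("Configuration")}

for path, value in read_configurations(DTSCONFIG).items():
    print(path, "=>", value)
```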

Environment Variable Configurations:
With this type of package configuration, you have to create an environment variable for each package property you intend to update, and place the configuration value as the value of the environment variable.
We have to choose the environment variable to be used from a drop down list and next it lets you choose which object property is going to be affected by the configuration we are creating.

Registry Entry Configurations:
This type will let us store the configuration values in Windows registry entries similar to environment variable configurations.
After we select the registry key configuration type from the dropdown list, we have to choose the method to be used. The first option is the direct method, where the wizard expects a valid registry key name that exists under the Windows registry HKEY_CURRENT_USER key. The second option is the indirect method, where you provide the name of an environment variable that in turn contains the registry key name to be used by the configuration. The indirect method gives you the flexibility to change the name of the registry key, or to point to a different one, by updating the environment variable value.
For Ex: If we want to use a registry key that exists directly under HKEY_CURRENT_USER,
Expected value is: SampleSSISProjConfig

If you create a registry key to be used by the configuration that is not directly under the HKEY_CURRENT_USER key, then the wizard expects this value: SampleSSISProjConfig\config1. Next we select the property object we want to update through the configuration.
There are a few other things we need to keep in mind when using this type of configurations:
• The wizard does not create or modify the registry keys; the registry keys must be manually created and edited.
• The registry keys must exist, either directly or indirectly, under HKEY_CURRENT_USER, and the configuration value has to be stored in an entry called ‘value’.
• Registry keys under HKEY_CURRENT_USER are visible only to the user that created them, hence, the package can apply the configurations only when it is executed using the credentials of that user. If you execute the package via SQL Server Agent, a workaround is to set up a proxy account in SQL Server Agent. Make sure that you consider all the implications of this behavior, as this may complicate your deployment scenarios.
• You have to use one registry key per configuration value. If you are configuring multiple properties, consider using SQL Server or XML file configurations instead.
• You do not need to close and reopen the BIDS session each time you create or make changes to the registry keys being used by the configuration. This is an advantage over environment variable configuration type.
• You need access to modify the Windows registry settings on each machine where you intend to deploy the package. This makes this configuration type risky.
• Be extremely careful when using the registry editor, as any error could cause serious problems to the system. Always create a backup of the registry before making changes.

Parent Package Variable Configurations:
When we execute a package (the child) from another package (the parent) via the Execute Package task, we can use Parent Package Variable configurations in the child package to pass variable values from the parent. In the Package Configuration Wizard, in the child package, we have to specify the name of the variable (that exists in the parent) that holds the desired configuration value.
Notice that the child package is unaware of the existence of the parent package, and the name of the variable that we enter is not validated when we create the configuration. When using the direct method, we have to type the variable name exactly as it appears in the parent package. Alternatively, we can select an environment variable that contains the name of the parent package variable, thus adding the flexibility the indirect method offers.
The next step of the wizard allows us to select the property to be updated, in the same way as when using registry entry or environment variable configurations.
• By design, parent package variables are always applied last, regardless of where we place them in the configurations organizer.
• This type of configuration is applied only when the child package is executed via the Execute Package task.
• SSIS variable names are case-sensitive. We always need to make sure that both the spelling and casing of the variable name in the configuration entry are identical to the ones in the parent package.
• Standalone execution and unit testing of child packages using this type of configuration may be challenging when the child package does not run in the context of the parent. A simple but effective workaround is to always apply this type of configurations to variables in the child package, and then use expressions in the objects that need to be updated (connection managers, server names, and so forth) based on those variables. This way, if you need to run the child package without the parent, you can manually edit the variables in the child package in BIDS, or via the /SET option of the DTExec utility.
• Each package configuration entry can only affect one property in the package being configured.

SQL Server Configurations:
This configuration type offers almost the same level of flexibility and functionality as XML configuration files, with the difference that configuration information is stored in a SQL Server table instead of an XML file. The table can be created in any database that is accessible by the package at execution time. You can use the Package Configuration Wizard to create the table. This is the default structure of the table:
CREATE TABLE [dbo].[SSIS Configurations]
(
ConfigurationFilter NVARCHAR(255) NOT NULL,
ConfiguredValue NVARCHAR(255) NULL,
PackagePath NVARCHAR(255) NOT NULL,
ConfiguredValueType NVARCHAR(20) NOT NULL
)

Above fields are used as follows:
• ConfigurationFilter: This field is used by SSIS to identify a set of property/value pairs that are part of the same configuration entry in the Package Configurations Organizer.
• ConfiguredValue: Stores the value that is used to update the package property specified in the PackagePath column.
• PackagePath: The path that points to the property being configured.
• ConfiguredValueType: The SSIS data type of the property being configured.
When we create SQL Server package configurations, we first have to choose which method we would use to provide the connection information to the configuration table.

With the direct method, the connection information, configuration table and filter are stored inside of the package. The indirect method instead allows storing that information in an environment variable. Notice that both methods use an SSIS connection manager and its connection string is hard-coded inside of the package. Therefore, if we use this configuration type, it is a good practice to ensure that the connection string in this connection manager can be updated from an external source. A common approach is to use a separate configuration (XML, Registry Key or environment variable) to update the connection manager when required.
The next step in the configuration wizard lets us choose the set of properties to be targeted by the configuration being created.

• SQL Server configurations are very flexible, as you can store multiple configuration properties/values in the same table, and multiple packages can use the same configuration table and filter.
• We can have a configuration filter for each entry in the configuration table, or we can use the same filter to group a set of entries. If we use the second approach, we need to make sure that all properties under the selected filter exist in the package being configured. Otherwise warning messages are generated for those properties in the configuration filter that do not exist in the target package.
• We can use existing database security and backup policies to protect the configuration table contents.
• A drawback of this configuration type is that it relies on an SSIS connection manager to get access to the configuration table, but does not have built-in support for updating its connection string. In other words, the connection string that points to the configuration table is hard-coded inside of the package, and we have to take extra steps in order to update it (for example, through an extra package configuration placed at the top of the Package Configurations Organizer, or in SQL Server 2008, by using the /CONN option of DTExec).
• The /CONN option of DTExec utility can be used to override the connection string used by the configuration only in SSIS 2008. This is because SQL Server 2008 Integration Services reloads configurations after applying DTExec command line options.
• The configuration wizard can create the table and insert the required rows when they do not exist, but only if the direct method is chosen. The wizard never includes any sensitive data in the ConfiguredValue column of the table, and you would need to manually update the table to add the required sensitive data.
• If we use the direct method, and provide an existing configuration filter, the wizard will prompt us to either reuse the configuration information in the table or to override it with the property values from the current package.
• When we use the indirect method, the environment variable must contain the name of the configuration connection manager, the name of the configuration table and the configuration filter. The syntax should be: "ConfigurationManagerName";"Schema.ConfigurationTableName";"ConfigurationFilter"
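A hedged sketch (in Python, with invented names) of how such a three-part environment-variable value could be split and validated before use:

```python
def parse_indirect_sql_config(env_value: str):
    """Split the environment-variable value used by an indirect SQL Server
    configuration into (connection manager, configuration table, filter)."""
    parts = [p.strip().strip('"') for p in env_value.split(";")]
    if len(parts) != 3 or not all(parts):
        raise ValueError('expected "Manager";"Schema.Table";"Filter"')
    return tuple(parts)

print(parse_indirect_sql_config('"ConfigDB";"dbo.SSIS Configurations";"ProdFilter"'))
```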

Additional Information:
1. A .dtsConfig file is not a configuration; it is a collection of one or more configurations.
2. You can save some development time by using package templates. If you anticipate creating a large number of packages that use a common set of package configurations, you can simply include the required package configurations in a template, and then use the template when creating new packages.
3. The data flow task in SSIS does not allow changes to the pipeline structure at run time (number of columns, column names, and data types), and package configurations should not be used to attempt such changes in dataflow pipeline metadata, since it would cause validation errors in the package.
4. Multiple packages can use the same XML configuration file.
5. The package configuration wizard can create the XML file only when the direct method is used. If you decide to use the indirect method, consider creating the configuration by using the direct method first, and once the file is created, change the method to the indirect one.
6. When creating the file, the wizard does not include any sensitive information (such as passwords in connection strings) in the configuration file, and you have to edit the configuration file to add such information manually.
7. The Package Configurations Organizer does not show any value for the Target Object and Target Property columns for XML file based configurations, as the file may contain multiple configurations.
8. The environment variable(s) must exist on all machines where the package is going to be deployed. This is because the name of the environment variable is stored inside of the package.
9. The configuration wizard does not create the environment variables, and they are visible to the wizard only if they were created prior to opening the current BIDS session.

Behavior Changes Related to Package Configurations:
In Short: In SQL Server 2005 Integration Services, all package configurations, except Parent Package Variable configurations, are applied before applying the options specified with the DTExec command. Parent Package Variables are applied after applying DTExec options.
In SQL Server 2008 Integration Services, package configurations are applied twice, before and after applying the options of the DTExec command prompt utility. This should be seen as an improvement, since you can now use the /SET, /CONF or /CONN options of the DTExec utility to alter the original definition of the configurations. For example, you can now use the /CONN option to alter the connection manager being used in SQL Server configurations - something that was not possible with the 2005 version.
In Detail: In SQL Server 2005 Integration Services, events occur in the following order:
1. The utility first applies the design-time configurations. (Design-time configurations are the configurations that you created and associated with the package when you designed the package in BI Development Studio.)
2. The utility then applies the run-time options that you specified on the command line when you started the utility.
In SQL Server 2008 Integration Services, events occur in the following order:
1. The utility first applies the design-time configurations.
2. The utility then applies the run-time options that you specified on the command line when you started the utility.
3. Finally, the utility reloads and reapplies the design-time configurations.
Because the events occur in this order, you can dynamically redirect a configuration by using the /SET command-line option to change the configuration string.

Script to populate DimDate Data


DECLARE @dtStartDate DATETIME, @dtEndDate DATETIME, @dtCurrDate DATETIME

SELECT @dtStartDate = '1/1/2010'
SELECT @dtEndDate = '12/31/2010'

SELECT @dtCurrDate = ISNULL(@dtCurrDate, @dtStartDate)
Declare @datedim TABLE
(
datekey DATETIME
,datewithdayname NVARCHAR(500)
,curryear INT
,calyear NVARCHAR(100)
,FirstDayOfQtr DATETIME
,QTYYear NVARCHAR(500)
,FirstDayOfMonth DATETIME
,MonthYear NVARCHAR(500)
,DayOfTheYear INT
,DayOfTheYearname NVARCHAR(50)
,day_of_the_quarter INT
,day_of_the_quarter_name NVARCHAR(200)
,day_of_month INT
,day_of_month_name NVARCHAR(250)
,Month_of_year INT
,Month_of_year_name NVARCHAR(250)
,Month_of_quarter INT
,Month_of_quarter_name NVARCHAR(250)
,quarter_in_year INT
,quarter_in_year_name NVARCHAR(250)
,semester_in_year int
,semester_in_year_name NVARCHAR(15)
,month_of_semester int
,month_of_semester_name NVARCHAR(25)
,day_of_the_semester int
,day_of_the_semester_name NVARCHAR(30)
)

WHILE @dtCurrDate <= @dtEndDate
BEGIN
INSERT INTO @datedim
SELECT @dtCurrDate AS datekey
,DATENAME(DW, @dtCurrDate)+', '+DATENAME(month, @dtCurrDate)
+ ' ' +CONVERT(NVARCHAR (2) , DAY(@dtCurrDate)) + ' ' +Convert (NVARCHAR(4),YEAR(@dtCurrDate) ) as datewithdayname
,YEAR(@dtCurrDate) as curryear
, ' Calendar'+Convert (NVARCHAR(4),YEAR(@dtCurrDate) ) as calyear
,CASE WHEN DATEPART(qq, @dtCurrDate) = 1 THEN Convert(DATETIME,'01/01/'+ CONVERT(varchar(4),YEAR(@dtCurrDate)))
WHEN DATEPART(qq, @dtCurrDate) = 2 THEN Convert(DATETIME,'04/01/'+ CONVERT(varchar(4),YEAR(@dtCurrDate)))
WHEN DATEPART(qq, @dtCurrDate) = 3 THEN Convert(DATETIME,'07/01/'+ CONVERT(varchar(4),YEAR(@dtCurrDate)))
WHEN DATEPART(qq, @dtCurrDate) = 4 THEN Convert(DATETIME,'10/01/'+ CONVERT(varchar(4),YEAR(@dtCurrDate)))
END as FirstDayOfQtr
,'Quarter '+ convert (NVARCHAR,DATEPART(qq, @dtCurrDate))+','+ convert (NVARCHAR,YEAR(@dtCurrDate)) as QTYYear
,CONVERT(DATETIME,Convert(NVARCHAR(2),Month(@dtCurrDate))+'/01/'+Convert(NVARCHAR(4),Year(@dtCurrDate))) as FirstDayOfMonth
,DATENAME(month,@dtCurrDate)+Convert(NVARCHAR(4),Year(@dtCurrDate)) as MonthYear
,DATEPART(DAYOFYEAR,@dtCurrDate) as DayOfTheYear
,'Day '+Convert(NVARCHAR(3),DATEPART(DAYOFYEAR,@dtCurrDate)) as DayOfTheYearname
--day of the quarter
,CASE WHEN Month( @dtCurrDate) IN (1,2,3) THEN DATEDIFF(DD,'1/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1
WHEN Month( @dtCurrDate) IN (4,5,6) THEN DATEDIFF(DD,'4/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1
WHEN Month( @dtCurrDate) IN (7,8,9) THEN DATEDIFF(DD,'7/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1
WHEN Month( @dtCurrDate) IN (10,11,12) THEN DATEDIFF(DD,'10/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1
END as day_of_the_quarter
--day of the quarter name
,CASE WHEN Month( @dtCurrDate) IN (1,2,3) THEN 'Day '+Convert(NVARCHAR,DATEDIFF(DD,'1/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1)
WHEN Month( @dtCurrDate) IN (4,5,6) THEN 'Day '+Convert(NVARCHAR,DATEDIFF(DD,'4/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1)
WHEN Month( @dtCurrDate) IN (7,8,9) THEN 'Day '+Convert(NVARCHAR,DATEDIFF(DD,'7/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1)
WHEN Month( @dtCurrDate) IN (10,11,12) THEN 'Day '+Convert(NVARCHAR,DATEDIFF(DD,'10/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1)
END as day_of_the_quarter_name
,Day(@dtCurrDate) as day_of_month
,'Day '+CONVERT(NVARCHAR,Day(@dtCurrDate)) as day_of_month_name

,Month(@dtCurrDate) as Month_of_year
,'Month '+CONVERT(NVARCHAR,Month(@dtCurrDate)) as Month_of_year_name
,CASE WHEN Month( @dtCurrDate) IN (1,2,3) THEN MONTH(@dtCurrDate)
WHEN Month( @dtCurrDate) IN (4,5,6) THEN MONTH(@dtCurrDate)-4+1
WHEN Month( @dtCurrDate) IN (7,8,9) THEN MONTH(@dtCurrDate)-7+1
WHEN Month( @dtCurrDate) IN (10,11,12) THEN MONTH(@dtCurrDate)-10+1
END as month_of_quarter
,CASE WHEN Month( @dtCurrDate) IN (1,2,3) THEN 'Month '+CONVERT(NVARCHAR, MONTH(@dtCurrDate))
WHEN Month( @dtCurrDate) IN (4,5,6) THEN 'Month '+CONVERT(NVARCHAR, MONTH(@dtCurrDate)-4+1)
WHEN Month( @dtCurrDate) IN (7,8,9) THEN 'Month '+CONVERT(NVARCHAR, MONTH(@dtCurrDate)-7+1)
WHEN Month( @dtCurrDate) IN (10,11,12) THEN 'Month '+CONVERT(NVARCHAR, MONTH(@dtCurrDate)-10+1)
END as month_of_quarter_name

,DATEPART(qq, @dtCurrDate) as quarter_in_year
,'Quarter '+Convert(NVARCHAR,DATEPART(qq, @dtCurrDate)) as quarter_in_year_name
--date-string columns in MMDDYYYY, DDMMYYYY and YYYYDDMM form
--(the CASE/END keywords, '+' concatenations and column aliases were lost from
-- this listing; the aliases below are assumed. The four-digit year needs no
-- zero padding, so it is emitted with a plain CONVERT.)
,CASE WHEN Month(@dtCurrDate)<10 THEN '0'+CONVERT(NVARCHAR,Month(@dtCurrDate))
      ELSE CONVERT(NVARCHAR,Month(@dtCurrDate)) END
+CASE WHEN Day(@dtCurrDate)<10 THEN '0'+CONVERT(NVARCHAR,Day(@dtCurrDate))
      ELSE CONVERT(NVARCHAR,Day(@dtCurrDate)) END
+CONVERT(NVARCHAR,Year(@dtCurrDate)) as date_mmddyyyy
,CASE WHEN Day(@dtCurrDate)<10 THEN '0'+CONVERT(NVARCHAR,Day(@dtCurrDate))
      ELSE CONVERT(NVARCHAR,Day(@dtCurrDate)) END
+CASE WHEN Month(@dtCurrDate)<10 THEN '0'+CONVERT(NVARCHAR,Month(@dtCurrDate))
      ELSE CONVERT(NVARCHAR,Month(@dtCurrDate)) END
+CONVERT(NVARCHAR,Year(@dtCurrDate)) as date_ddmmyyyy
,CONVERT(NVARCHAR,Year(@dtCurrDate))
+CASE WHEN Day(@dtCurrDate)<10 THEN '0'+CONVERT(NVARCHAR,Day(@dtCurrDate))
      ELSE CONVERT(NVARCHAR,Day(@dtCurrDate)) END
+CASE WHEN Month(@dtCurrDate)<10 THEN '0'+CONVERT(NVARCHAR,Month(@dtCurrDate))
      ELSE CONVERT(NVARCHAR,Month(@dtCurrDate)) END as date_yyyyddmm

, CASE WHEN Month(@dtCurrDate) BETWEEN 1 AND 6 THEN 1 ELSE 2 END AS semester_in_year
,'Semester '+ CASE WHEN Month(@dtCurrDate) BETWEEN 1 AND 6 THEN '1' ELSE '2' END AS semester_in_year_name
,CASE WHEN Month( @dtCurrDate) IN (1,2,3,4,5,6) THEN MONTH(@dtCurrDate)
WHEN Month( @dtCurrDate) IN (7,8,9,10,11,12) THEN MONTH(@dtCurrDate)-7+1
END as month_of_semester
,CASE WHEN Month( @dtCurrDate) IN (1,2,3,4,5,6) THEN 'Month ' + Cast(MONTH(@dtCurrDate) AS nvarchar)
WHEN Month( @dtCurrDate) IN (7,8,9,10,11,12) THEN 'Month ' + CAST(MONTH(@dtCurrDate)-7+1 AS nvarchar)
END as month_of_semester_name
--day of the semester
,CASE WHEN Month( @dtCurrDate) IN (1,2,3,4,5,6) THEN DATEDIFF(DD,'1/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1
WHEN Month( @dtCurrDate) IN (7,8,9,10,11,12) THEN DATEDIFF(DD,'7/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1
END as day_of_the_semester
,'Day '+CASE WHEN Month( @dtCurrDate) IN (1,2,3,4,5,6) THEN Cast(DATEDIFF(DD,'1/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1 as nvarchar)
WHEN Month( @dtCurrDate) IN (7,8,9,10,11,12) THEN Cast(DATEDIFF(DD,'7/1/'+Convert(NVARCHAR,YEAR(@dtCurrDate)),@dtCurrDate)+1 as nvarchar)
END as day_of_the_semester_name

SELECT @dtCurrDate = DATEADD(d, 1, @dtCurrDate)
END --closes the WHILE loop that walks @dtCurrDate over the date range (assumed: the END was dropped from the listing)

SELECT * INTO datetemp FROM @datedim

SELECT * FROM datetemp
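The long CASE ladders for the quarter and semester columns can also be written with the DATEADD/DATEDIFF truncation idiom. This is a minimal sketch, not part of the original script (@dtCurrDate is declared here only to make it runnable on its own):

```sql
DECLARE @dtCurrDate datetime = '20110815'

--day of the quarter: days since the first day of the current quarter, plus 1
SELECT DATEDIFF(DAY,  DATEADD(QUARTER, DATEDIFF(QUARTER, 0, @dtCurrDate), 0), @dtCurrDate) + 1 AS day_of_the_quarter  --46
--month of the quarter
      ,DATEDIFF(MONTH, DATEADD(QUARTER, DATEDIFF(QUARTER, 0, @dtCurrDate), 0), @dtCurrDate) + 1 AS month_of_quarter  --2
--day of the semester: there is no SEMESTER datepart, so anchor explicitly on January 1 or July 1
      ,DATEDIFF(DAY, DATEADD(MONTH, ((MONTH(@dtCurrDate) - 1) / 6) * 6,
                             DATEADD(YEAR, DATEDIFF(YEAR, 0, @dtCurrDate), 0)), @dtCurrDate) + 1 AS day_of_the_semester  --46
```

The idiom works because DATEDIFF(QUARTER, 0, @date) counts whole quarters since the base date, and adding that many quarters back to the base date lands on the first day of the current quarter.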

Sunday, February 13, 2011

SSRS Security

How can you manage permissions in SQL Server Reporting Services to ensure that sensitive reports are only seen by the people authorized to view them?
Because much of a company's data is sensitive and should not be seen by everyone, data protection is a priority. In this document, we will cover the SSRS security model and talk about how you can leverage it to lock down your SSRS environment.

SQL Server Reporting Services security is managed on two levels: the SSRS site and on items within the site. At each level, the tasks that users can perform are managed via roles. Roles are just groups with certain tasks assigned to them and members of the role can perform the assigned tasks. It's that simple. Taking some time to create appropriate roles and assigning users accordingly will ensure that your SSRS site, and the reports it houses, are secure.

Site-level security in Reporting Services
The first level of security is site-level security. On the SSRS site as a whole, you can manage the tasks your users are allowed to perform. The tasks are fixed, and you need to create your roles with these fixed tasks in mind.

At this level, you will be assigning "administrative" tasks to users. Standard users that just need to view reports will probably not require any permission at this level of SQL Server Reporting Services, beyond the ability to view properties and schedules and execute report definitions.

If you want to add additional users or groups to the System Administrators role without adding to the local administrators group, you’ll need to go into the System Role Assignments page. To do this interactively you can navigate to the report manager (http://reportserver/reports, for instance) and click Site Settings in the upper right hand of the screen. Towards the bottom of the Site Settings screen you’ll find three links under a security heading.

Click on “New Role Assignment”
From here you can enter a username or a group name into the textbox. For machines in a domain, you can prefix the name with the domain name (DOMAIN\scott, for example). Reporting Services will verify the entry, so any spelling mistakes will be caught. Select the checkboxes for the roles you want to assign.
Here are the tasks you can assign to roles at the site level:
• Execute Report Definitions allows a user to run a report definition without first loading the definition onto the SSRS server. This is required if you want your users to run report definitions from applications separate from SSRS, such as the Report Builder.
• Generate Events allows applications to generate events in the SSRS namespace.
• Manage Jobs permits users to view the jobs that are running on your SSRS server and cancel them if necessary.
• Manage Report Server Properties is a task that allows users to manage properties of the report server, as well as the items managed by the report server.
• Manage Report Server Security allows users to view and modify the members of your system-level roles.
• Manage Roles permits users to create, view and modify the role definitions. These users can change the tasks that are assigned to your roles.
• Manage Shared Schedules. SSRS contains shared schedules that can be tied to report execution; this task will allow users to manage these schedules.
• View Report Server Properties lets users view, but not change, the properties of the report server. This task is implied if the role has the Manage Report Server Properties task assigned.
• View Shared Schedules lets users view, but not change, the shared schedules on the report server. This task is implied if the role has the Manage Shared Schedules task assigned.
To control which users are allowed to perform each of these tasks, you first create a system-level role. There are two built in system-level roles when you install SQL Server Reporting Services.
• System Administrator. This system-level role manages all aspects of the SSRS site. The only task that cannot be performed by default is the Generate Events task. If you want administrators, or any user for that matter, to be able to perform this task, you must explicitly assign it to the appropriate role.
• System User. These users are allowed to view report server properties and shared schedules and execute report definitions. These tasks are assigned so that users can run reports.
To create new roles, click Site Settings in the top right corner of Report Manager and then select Configure system-level role definitions under Security. This opens the System Roles page. To create a new role, click the New Role button, which opens the New System Role page. All you have to do now is name the role, give it a description, and select all the tasks you want this role to be able to perform. When you're done, click OK.

Item-level security in Reporting Services
The second level of security applies to the items on the site: folders, reports, resources and shared data sources. Here are the tasks that can be assigned to item-level roles:
• Consume Reports allows users to read report definitions, for example so they can open a report in Report Builder.
• Create Linked Reports allows users to create linked reports that are based on an existing report. Users can also publish these linked reports to a folder.
• Manage All Subscriptions is a task that permits the user to view and manage other users' subscriptions to an item.
• Manage Data Sources lets the user create and delete shared data sources in SSRS.
• Manage Folders allows users to create and delete folders in SSRS. They can also modify the properties of existing folders.
• Manage Individual Subscriptions permits the user to create, view and modify subscriptions the user owns.
• Manage Models is a task giving the user rights to create, view and modify models.
• Manage Reports allows users to create and delete reports.
• Manage Resources lets users create, modify and delete resources in a folder. Resources are non-report items stored on the report server, such as images or HTML files.
• Set Security for Individual Items is a task that permits the user to manage security for reports, folders, resources and shared data sources.
• View Data Sources lets the user view the properties of shared data sources.
• View Folders allows a user to view folders and folder properties.
• View Models authorizes users to view models and model properties.
• View Reports lets a user view and run reports in the folder hierarchy, along with their properties and history snapshots.
• View Resources lets a user view resources and resource properties in folders.
As with system-level roles, there are some built-in item-level roles you can use when assigning permissions in this area of SQL Server Reporting Services. If these roles aren't enough, you can build additional roles and assign users any combination of the item-level tasks we just looked at. The built-in item-level roles are as follows:
• Browser. This role is configured to allow users to view folders and reports and lets them subscribe to reports.
• Content Manager. These users can manage the content of the SSRS site, including managing folders, reports and resources.
• My Reports. Users are allowed to publish reports and manage reports, folders and resources in their My Reports folder.
• Publisher. This user can publish reports and manage reports, folders and resources on the report server.
• Report Builder. Report Builders have permission to view report definitions.
Now, let's get down to the details. To manage these roles, you have several options. Item-level security can be applied to a folder, report, data source or resource. To give SQL Server users permission to an item, you need to open that item and view its security properties. When you add a user, you also have to assign that user to a role for that item. In the case of folders, the role a user is assigned at the top-level folder will, by default, be inherited by the items inside that folder. You do have the ability to override the inherited security on a lower-level folder or item.
At this point, the security of your SSRS server is entirely up to you. You can create different folders for each department and assign only employees in that department with access to that folder. Within each department folder, I like to create an additional folder for sensitive reports and further lock that folder down to the appropriate users. Take some time and really plan out how your reports will be placed on the server and how you want the security to look. When using SQL Server Reporting Services, there is no reason that all of your reports, regardless of sensitivity, can't be stored in a single report server.
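For auditing, the item-level role assignments can also be read straight out of the ReportServer catalog database. This is a read-only sketch: the table and column names below come from the default SSRS catalog schema and can change between versions, so treat it as unsupported and never write to these tables.

```sql
--who holds which role on which item, from the ReportServer catalog database
SELECT c.Path, u.UserName, r.RoleName
FROM dbo.PolicyUserRole pur
INNER JOIN dbo.Users   u ON u.UserID   = pur.UserID
INNER JOIN dbo.Roles   r ON r.RoleID   = pur.RoleID
INNER JOIN dbo.Catalog c ON c.PolicyID = pur.PolicyID
ORDER BY c.Path, u.UserName
```

Running this periodically makes it easy to spot folders whose inherited security has been overridden unexpectedly.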

Performance Point Services 2010

PerformancePoint Services is a performance management service that you can use to monitor and analyze your business. Put another way, PerformancePoint Services is Microsoft's dashboard delivery tool, which is now part of the SharePoint Server 2010 Enterprise platform. It enables you to create rich, context-driven dashboards that aggregate data and content to provide a complete view of how your business is performing at all levels.

It provides you with flexible and easy-to-use tools for building Key Performance Indicators (KPIs), Scorecards, Analytic Charts and Grids, Reports, Filters and Dashboards. Each of these components is unique to PerformancePoint Services and provides functionality that interacts with a server component that handles the hard parts like data connectivity and security.

In SharePoint Server 2010, services are no longer contained within a Shared Service Provider (SSP); instead, you create a service application for each service and share it with your existing web applications. In practical terms, PerformancePoint Services is one of the services that lives on the application server, with its own database (something like the "PerformancePoint Service Application database") on the SQL Server box.

What is the Secure Store Service? This service application stores the password for the PerformancePoint Services unattended service account.

* With PerformancePoint Services, the dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities.
* The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards.
* You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features.
* Better Time Intelligence filtering capabilities let you create and use dynamic time filters that are always up to date. Other filter improvements make it easier for dashboard users to quickly focus on the information that is most relevant.
* Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web parts on the same page.
* Easier to author and publish dashboard items by using Dashboard Designer.
* The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server.
* Create analytic reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

PerformancePoint Dashboard Designer is the design tool you will use to build key performance indicators (KPIs), indicators, scorecards, reports, filters, data sources, and dashboards. It also enables you to deploy your finished dashboards to SharePoint.

To get started, open the Dashboard Designer. To do this, follow the steps below:
Go to Programs -> SharePoint -> PerformancePoint Dashboard Designer, and click Run Dashboard Designer. This will download and install the PerformancePoint Dashboard Designer on your workstation.
Once the executable file has been downloaded and installed on your computer, the PerformancePoint Dashboard Designer appears.

Once the Dashboard Designer is installed, you start with an empty workspace, or you can create a new one. A workspace is the primary container for all of the elements that you can use to build your dashboard, and it keeps its content synched with the site from which it was launched.

Create Your Workspace -> Create Your Data Source Connection

In PerformancePoint Services you must create a connection to the data source or sources you want to use in your dashboard. All data used in PerformancePoint Services is external data, living in data repositories outside of PerformancePoint. After you establish a data connection, you will be able to use the data in the various PerformancePoint feature areas.

Creating a Data Source
To create a Data Source in the Dashboard Designer click the Create tab then click Data Source

Click the Analysis Services template; the Analytic Grid Report requires a Data Source type of SQL Server Analysis Services (SSAS).

Let's look at the types of data sources you can use and their features:
1. SharePoint Lists – Data contained in a SharePoint list on a SharePoint site can be used in PerformancePoint Services by creating a SharePoint List data source in Dashboard Designer. Note that data from SharePoint lists can only be read; modifications to SharePoint list data must be made from SharePoint.

2. Excel Services – Data in Excel files published to Excel Services on a SharePoint Site can be used in PerformancePoint Services by creating an Excel Services data source. Supported published data can only be read in PerformancePoint Services. Published parameter values can be modified from the Dashboard Designer. If you use an Excel Services parameter in calculating a KPI, it is easy to make additional changes. PerformancePoint Services supports the following Excel Services components: Named Ranges, Tables and Parameters.

3. Excel workbooks – You may use the content of an actual Excel file as a data source in PerformancePoint Services by creating an Excel Workbook data source connection and selecting only the data that is to be used. The original Excel file will remain independent of the PerformancePoint copy. PerformancePoint Services 2010 supports Excel 2007 and Excel 2010 workbooks as data sources.

4. SQL Server tables – You can create a data source connection to a SQL Server database and use the data within PerformancePoint Services. SQL tables and SQL views are supported data sources within PerformancePoint Services.

5. Analysis Services – Use data residing in a SQL Server Analysis Services cube in PerformancePoint Services by creating a data connection to the source. PerformancePoint Services lets you map the desired time dimension and the required level of detail for its hierarchies to the internal PerformancePoint Services Time Intelligence.

After selecting Analysis Services as the Data Source type, provide a name for the Data Source. If you enter a display folder location, a folder will be added under Data Sources in the Workspace Browser and the new Data Source will be placed into that folder. Checking Grant read permission to all authenticated users is usually a good idea; otherwise you can fine tune the security in a later step.

After clicking Finish on the Create a Data Source dialog, a tabbed dialog is displayed to define the Data Source. The Editor tab is used to specify the connection settings for the SSAS database and cube.

The Properties tab allows you to specify some general properties, your own custom properties, and permissions. There are two roles in the permissions - Editor and Reader. An Editor can change the Data Source; a Reader cannot.

Publishing the element takes its definition in the Dashboard Designer workspace file and saves it to the PPS Monitoring server database where it becomes available for use. There are several options for publishing:
• Right click on the element in the Workspace Browser and select Publish from the popup menu
• Select the element in the Workspace Browser and click the Publish Item icon on the Home tab
• Click the Publish All icon on the Home tab

Creating KPI –
Now that we have our connection ready, let's create our key performance indicator (KPI). To create a new KPI to track whatever you want to track for your company, follow these steps:
1. Right-click the PerformancePoint Content folder and select New ➪ KPI, or select KPI from the Create tab.

2. In the Select a KPI Template dialog, select Blank KPI, and then click OK.

3. Once you have your KPI created, you can define your Actual and Target values. Also, select the data source and the measure.

Select Appropriate measure

Select Data Source

4. Select the Target row, and click the Set Scoring Pattern and Indicator button in the Thresholds area.

5. Next, in the Edit Binding Settings dialog, select the first option (Band by normalized value of Actual/Target) from the Banding method drop-down, and then click Next.

6. In the Select an Indicator step, select an indicator to use for the target that clearly shows whether the goal is met. You can choose from a collection of indicator templates available in PerformancePoint Dashboard Designer. Once you are done, click Next.

7. In the last step of the wizard, leave the value intact and click Finish.

8. Save the KPI by right-clicking it in the Workspace Browser, and then selecting Save.

Creating a Scorecard
Scorecards are becoming an integral part of many BI applications. They offer an easy way for a business user to see how the business is doing. Scorecards can be developed to measure the performance of many different parts of the business such as the total company, divisions, departments, etc. In each case the scorecard clearly enumerates the objectives, targets, and actual results of the particular area of the business. The power of the scorecard is its ability to present this information in a way that allows us to focus on whether the actual business results are meeting our stated goals and objectives.
To create a Scorecard in the Dashboard Designer click the Create tab then click Scorecard.
For our example we are going to build a scorecard that uses data from the sample AdventureWorks cube that comes with SQL Server Analysis Services 2005. Select Analysis Services as the Template and make sure that "Use wizards to create scorecards" is checked. When you are getting started with developing scorecards it's a good idea to use the wizard provided by the Dashboard Designer.

After checking the Wizard option, you will be guided through a sequence of dialogs to define the scorecard. First you will name the scorecard. Check Grant read permission to all authenticated users if you want everyone to be able to view the scorecard; you can also fine tune security later if necessary.

Select AdventureWorks for the Data Source. A scorecard can contain KPIs from multiple data sources; however, the wizard only allows you to select a single data source. After you complete the wizard you can add KPIs from other data sources if necessary.

The Analysis Services template allows you to create KPIs from measures in the cube or import KPIs already defined in the cube. For our example we will create KPIs from measures in the cube. Defining KPIs in the cube is a best practice that you should consider; however, it does require a good understanding of MDX.

After deciding to create KPIs you will have the opportunity to choose measures from the cube. Each time you click the Add KPI button, another measure will be added to the list below. Keep clicking the Add KPI button until there are no more measures to add. You can delete measures that you don't want by highlighting the row(s) then clicking the Delete Selected KPIs button.

Now that we have created a sample scorecard, our final step is to create a dashboard and add our scorecard to it. Make sure that you have published the scorecard and all of the KPIs.

The scorecard shows actual results, targets and a visual indicator which clearly shows whether we are meeting our objectives. The indicator above is the default stoplight that comes with PPS. Green means we are on target, yellow means we are slightly off target, and red means we are way below the target. The indicators allow us to quickly spot the problem areas.

Friday, February 11, 2011

How to check duplicate entries in the table

--the CTE wrapper and the ROW_NUMBER() call were missing from this listing;
--PARTITION BY should list the column(s) that define a duplicate (WeekKey assumed here)
;WITH Ranks AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY WeekKey ORDER BY WeekKey) AS Row_Count
    FROM DimWeek
)
select * from Ranks where Row_Count>1
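Listing the duplicates is usually step one; the same Ranks pattern can also delete them, because deleting through a CTE removes the underlying rows. A sketch, assuming WeekKey alone is what defines a duplicate in DimWeek:

```sql
;WITH Ranks AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY WeekKey ORDER BY WeekKey) AS Row_Count
    FROM DimWeek
)
DELETE FROM Ranks   --keeps one row per WeekKey and removes the rest
WHERE Row_Count > 1
```

Run the SELECT version first to confirm exactly which rows will be removed.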

Monday, February 7, 2011

Dynamic Cascading Parameters in SSRS


--dataset for the Tables parameter, driven by the @databases report parameter
DECLARE @sqlCommand nvarchar(1000)='use ' + @databases+' SELECT name FROM sys.tables order by 1'
EXECUTE sp_executesql @sqlCommand

--dataset for the Columns parameter, driven by @databases and @Tables
DECLARE @sqlCommand nvarchar(1000)='use ' + @databases+' Select sys.columns.Name from sys.columns Inner Join sys.objects
On sys.objects.object_id = sys.columns.object_id
AND sys.objects.name = '''+@Tables +''''
EXECUTE sp_executesql @sqlCommand
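One caveat with the approach above: concatenating @databases and @Tables straight into the command string invites SQL injection. A hardened sketch, assuming the same @databases and @Tables report parameters:

```sql
--QUOTENAME guards the database name; the table name travels as a real
--parameter instead of being concatenated into the string
DECLARE @sqlCommand nvarchar(1000) =
    'use ' + QUOTENAME(@databases) + ';
     SELECT c.name
     FROM sys.columns c
     INNER JOIN sys.objects o ON o.object_id = c.object_id
     WHERE o.name = @TableName
     ORDER BY 1'
EXECUTE sp_executesql @sqlCommand, N'@TableName sysname', @TableName = @Tables
```

The USE statement still takes effect inside the dynamic batch, so sys.columns and sys.objects resolve in the selected database.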