Tuesday, May 31, 2011

Things to know in SQL Server

SQL Profiler
Database Engine Tuning Advisor
Display Estimated Execution Plan
Include Actual Execution Plan
Include Client Statistics
Results to --> Text, Grid and File
Query options
Design Query in Designer

Template Explorer
Maintenance Plans
SQL Server Agent (Jobs)

Attach/Detach Database
Take offline
Import/Export Data
Reports--> (Standard and Custom)
Generate Script (with and without data)
Database Options

Design Database
Stored Procedure
Queries (subqueries, correlated subqueries, and dynamic SQL)
Temp tables
Table type Variables

Tuning Stored Procedure

Subscription of Reports through SharePoint

A subscription is a standing request to deliver a report at a specific time or in response to an event, and then to have that report presented in a way that you define. Subscriptions provide an alternative to running a report on demand. On-demand reporting requires that you actively select the report each time you want to view the report. In contrast, subscriptions can be used to schedule and then automate the delivery of a report.

Now I want to create a subscription on a report.
Click the down arrow on the right side of that report and click Manage Subscriptions.
A new screen appears.

Email Subscription:
After that, click Add Email Subscription on that screen; the screen that appears is self-explanatory (To, CC, BCC, Priority, Comments, etc.).

If you want to schedule the delivery time, click the Configure button; the resulting screen is also self-explanatory (Frequency, Schedule, and Start/End Dates).

After scheduling, click OK to save all changes, then close the subscription. Your subscription is now created, and you will see a new screen confirming it.

Monday, May 30, 2011

Top 10 SQL Server Integration Services Best Practices

SSIS is an in-memory pipeline, so ensure that all transformations occur in memory.
The purpose of having Integration Services within SQL Server is to provide a flexible, robust pipeline that can efficiently perform row-by-row calculations and parse data entirely in memory.
While the extract and load phases of the pipeline will touch disk (read and write respectively), the transformation itself should process in memory. If transformations spill to disk (for example with large sort operations), you will see a big performance degradation. Construct your packages to partition and filter data so that all transformations fit in memory.
A great way to check whether your packages are staying within memory is to review the SSIS performance counter Buffers spooled, which has an initial value of 0; a value above 0 indicates that the engine has started swapping to disk. For more information, please refer to Something about SSIS Performance Counters.

Plan for capacity by understanding resource utilization.
SQL Server Integration Services is designed to process large amounts of data row by row in memory with high speed. Because of this, it is important to understand resource utilization, i.e., the CPU, memory, I/O, and network utilization of your packages.
CPU Bound
Seek to understand how much CPU is being used by Integration Services and how much CPU is being used overall by SQL Server while Integration Services is running. This latter point is especially important if you have SQL Server and SSIS on the same box, because if there is resource contention between the two, it is SQL Server that will typically win, resulting in disk spilling from Integration Services, which slows transformation speed.
The perfmon counter that is of primary interest to you is Process / % Processor Time (Total). Measure this counter for both sqlservr.exe and dtexec.exe. If SSIS is not able to drive close to 100% CPU load, this may be indicative of:
• Application contention: For example, SQL Server is taking on more processor resources, making them unavailable to SSIS.
• Hardware contention: A common scenario is that you have suboptimal disk I/O or not enough memory to handle the amount of data being processed.
• Design limitation: The design of your SSIS package is not making use of parallelism, and/or the package uses too many single-threaded tasks.
Network Bound
SSIS moves data as fast as your network is able to handle it. Because of this, it is important to understand your network topology and ensure that the path between your source and target has both low latency and high throughput.
The following Network perfmon counters can help you tune your topology:
Network Interface / Current Bandwidth: This counter provides an estimate of current bandwidth.
Network Interface / Bytes Total / sec: The rate at which bytes are sent and received over each network adapter.
Network Interface / Transfers/sec: Shows how many network transfers per second are occurring. If it is approaching 40,000 IOPs, then add another NIC and use teaming between the NICs.
These counters enable you to analyze how close you are to the maximum bandwidth of the system. Understanding this will allow you to plan capacity appropriately whether by using gigabit network adapters, increasing the number of NIC cards per server, or creating separate network addresses specifically for ETL traffic.
I/O Bound
If you ensure that Integration Services is minimally writing to disk, SSIS will only hit the disk when it reads from the source and writes to the target. But if your I/O is slow, reading and especially writing can create a bottleneck.
Because tuning I/O is outside the scope of this technical note, please refer to Predeployment I/O Best Practices. Remember that an I/O system is not only specified by its size ( “I need 10 TB”) – but also by its sustainable speed (“I want 20,000 IOPs”).
Memory bound
A very important question that you need to answer when using Integration Services is: “How much memory does my package use?”
The key counters for Integration Services and SQL Server are:
Process / Private Bytes (DTEXEC.exe) – The amount of memory currently in use by Integration Services. This memory cannot be shared with other processes.
Process / Working Set (DTEXEC.exe) – The total amount of physical memory in use by Integration Services, including memory shared with other processes.
SQL Server: Memory Manager / Total Server Memory: The total amount of memory allocated by SQL Server. Because SQL Server has another way to allocate memory using the AWE API, this counter is the best indicator of total memory used by SQL Server. To understand SQL Server memory allocations better, refer to Slava Ok’s Weblog.
Memory / Page Reads / sec – Represents the total memory pressure on the system. If this value consistently exceeds 500, the system is under memory pressure.

Baseline source system extract speed.
Understand your source system and how fast you extract from it. After all, Integration Services cannot be tuned beyond the speed of your source – i.e., you cannot transform data faster than you can read it.
Measure the speed of the source system by creating a very simple package that reads data from your source into a Row Count destination:
Execute the package from the command line (DTEXEC) and measure the time it took for it to complete its task. Use the Integration Services log output to get an accurate calculation of the time. You want to calculate rows per second:
Rows / sec = Row Count / Time of Data Flow
Based on this value, you now know the maximum number of rows per second you can read from the source – this is also the roof on how fast you can transform your data. To increase this Rows / sec calculation, you can do the following:
• Improve drivers and driver configurations: Make sure you are using the most up-to-date driver configurations for your network, data source, and disk I/O. Often the default network drivers on your server are not configured optimally for the network stack, which results in performance degradations when there are a high number of throughput requests. Note that for 64-bit systems, at design time you may be loading 32-bit drivers; ensure that at run time you are using 64-bit drivers.

• Start multiple connections: To overcome limitations of drivers, you can try to start multiple connections to your data source. As long as the source can handle many concurrent connections, you may see an increase in throughput if you start several extracts at once. If concurrency is causing locking or blocking issues, consider partitioning the source and having your packages read from different partitions to more evenly distribute the load.

• Use multiple NIC cards: If the network is your bottleneck and you’ve already ensured that you’re using gigabit network cards and routers, then a potential solution is to use multiple NIC cards per server. Note that you will have to be careful when you configure multiple NIC environments; otherwise you will have network conflicts.

Optimize the SQL data source, lookup transformations, and destination.
When you execute SQL statements within Integration Services (as noted in the Data access mode dialog box above), whether to read a source, to perform a lookup transformation, or to modify tables, some standard optimizations significantly help performance:
• Use the NOLOCK or TABLOCK hints to remove locking overhead.
• To optimize memory usage, SELECT only the columns you actually need. If you SELECT all columns from a table (e.g., SELECT * FROM) you will needlessly use memory and bandwidth to store and retrieve columns that do not get used.

• If possible, perform your datetime conversions at your source or target databases, as it is more expensive to perform them within Integration Services.

• In SQL Server 2008 Integration Services, there is a new feature of the shared lookup cache. When using parallel pipelines (see points #8 and #10 below), it provides a high-speed, shared cache.

• If Integration Services and SQL Server run on the same server, use the SQL Server destination instead of the OLE DB destination to improve performance.

• Commit size 0 is fastest on heap bulk targets, because only one transaction is committed. If you cannot use 0, use the highest possible commit size to reduce the overhead of multiple-batch writing. A commit size of 0 is a bad idea if you are inserting into a B-tree, because all incoming rows must be sorted at once into the target B-tree, and if your memory is limited, you are likely to spill. A commit size of 0 is ideal for inserting into a heap. For an indexed destination, I recommend testing batch sizes between 100,000 and 1,000,000.

• Use a commit size of less than 5,000 to avoid lock escalation when inserting; note that in SQL Server 2008 you can now enable/disable lock escalation at the object level, but use this wisely.
• Heap inserts are typically faster than inserts into a clustered index. This means that you may want to drop indexes and rebuild if you are changing a large part of the destination table; you will want to test your inserts both with indexes in place and with all indexes dropped and rebuilt to validate.
• Use partitions and the partition SWITCH command; i.e., load a work table that contains a single partition and SWITCH it into the main table after you build the indexes and put the constraints on.
• Another great reference from the SQL Performance team is Getting Optimal Performance with Integration Services Lookups.
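The work-table-plus-SWITCH pattern mentioned above can be sketched in T-SQL roughly as follows. This is a minimal sketch with hypothetical table, index, and partition names; it assumes dbo.Sales is partitioned by month on SaleDate and that the work table matches the target exactly in columns, indexes, and filegroup placement:

```sql
-- 1. Load an empty work table (heap insert, eligible for minimal logging).
CREATE TABLE dbo.Sales_Work
(
    SaleDate DATETIME NOT NULL,
    Amount   MONEY    NOT NULL
) ON [PRIMARY];

-- ... bulk load dbo.Sales_Work here ...

-- 2. Build the index and the check constraint that proves every row
--    belongs to the target partition's boundary range.
CREATE CLUSTERED INDEX IX_Sales_Work ON dbo.Sales_Work (SaleDate);
ALTER TABLE dbo.Sales_Work WITH CHECK
    ADD CONSTRAINT CK_Sales_Work_May2011
    CHECK (SaleDate >= '20110501' AND SaleDate < '20110601');

-- 3. Metadata-only operation: the loaded work table becomes
--    partition 5 of the main table almost instantly.
ALTER TABLE dbo.Sales_Work SWITCH TO dbo.Sales PARTITION 5;
```

Because the switch is a metadata change, it completes in milliseconds regardless of row count, which is what makes this pattern attractive for large loads.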

Tune your network.
A key network property is the packet size of your connection. By default this value is set to 4,096 bytes. This means a new network packet must be assembled for every 4 KB of data. As noted in the SqlConnection.PacketSize property documentation in the .NET Framework Class Library, increasing the packet size will improve performance because fewer network read and write operations are required to transfer a large data set. If your system is transactional in nature, with many small reads and writes, lowering the value will improve performance.

Since Integration Services is all about moving large amounts of data, you want to minimize the network overhead. This means that the value 32K (32767) is the fastest option. While it is possible to configure the network packet size at the server level using sp_configure, you should not do this. The database administrator may have reasons to use a different server setting than 32K. Instead, override the server setting in the connection manager as illustrated below.

Another network tuning technique is to use network affinity at the operating system level. At high throughputs, you can sometimes improve performance this way. For the network itself, you may want to work with your network specialists to enable jumbo frames, which increase the default payload from 1,500 bytes to 9,000 bytes. Enabling jumbo frames further decreases the number of network operations required to move large data sets.
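If you want to see what the server-wide default currently is (which, as noted above, you should leave alone and override per connection instead), a quick sketch to inspect it:

```sql
-- 'network packet size' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- View the server-wide value; the out-of-the-box default is 4096 bytes.
EXEC sp_configure 'network packet size';

-- Do NOT change it here. Set Packet Size = 32767 on the SSIS
-- connection manager so only your ETL connections are affected.
```

This keeps the DBA's server-level setting intact while your packages still get the large-packet benefit.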

Use data types – yes, back to data types! – wisely.
Of all the points on this top 10 list, this is perhaps the most obvious. Yet, it is such an important point that it needs to be made separately. Follow these guidelines:
• Make data types as narrow as possible so you will allocate less memory for your transformation.
• Do not perform excessive casting of data types; it will only degrade performance. Match your data types to the source or destination and explicitly specify any necessary data type casting.
• Watch precision issues when using the money, float, and decimal types. Also, be aware that money is faster than decimal, and money has fewer precision considerations than float.

Change the design.
There are some things that Integration Services does well – and other tasks where using another tool is more efficient. Your tool choice should be based on what is most efficient and on a true understanding of the problem. To help with that choice, consider the following points:
• Do not sort within Integration Services unless it is absolutely necessary. In order to perform a sort, Integration Services allocates the memory space of the entire data set that needs to be transformed. If possible, presort the data before it goes into the pipeline. If you must sort data, try your best to sort only small data sets in the pipeline. Instead of using Integration Services for sorting, use an SQL statement with ORDER BY to sort large data sets in the database – mark the output as sorted by changing the Integration Services pipeline metadata on the data source.
• There are times when using Transact-SQL will be faster than processing the data in SSIS. As a general rule, any and all set-based operations will perform faster in Transact-SQL because the problem can be transformed into a relational (domain and tuple) algebra formulation that SQL Server is optimized to resolve. Also, the SQL Server optimizer will automatically apply high parallelism and memory management to the set-based operation, work you may have to do yourself if you are using Integration Services. Typical set-based operations include:
o Set-based UPDATE statements, which are far more efficient than row-by-row OLE DB calls.
o Aggregation calculations such as GROUP BY and SUM. These are typically also calculated faster using Transact-SQL instead of in-memory calculations by a pipeline.
• Delta detection is the technique where you change existing rows in the target table instead of reloading the table. To perform delta detection, you can use a change detection mechanism such as the new SQL Server 2008 Change Data Capture (CDC) functionality. If such functionality is not available, you need to do the delta detection by comparing the source input with the target table. This can be a very costly operation requiring the maintenance of special indexes and checksums just for this purpose. Often, it is fastest to just reload the target table. A rule of thumb is that if the target table has changed by >10%, it is often faster to simply reload than to perform the logic of delta detection.
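A set-based delta apply of the kind described above might be sketched like this. Table and column names are hypothetical; it assumes a staging table stg.Sales carrying the source extract and a ModifiedOn column usable for change comparison:

```sql
-- Apply changed rows from staging to the target in one set-based
-- statement, instead of one OLE DB Command call per row.
UPDATE t
SET    t.Amount     = s.Amount,
       t.ModifiedOn = s.ModifiedOn
FROM   dbo.Sales AS t
JOIN   stg.Sales AS s ON s.SaleID = t.SaleID
WHERE  t.ModifiedOn < s.ModifiedOn;   -- touch only rows that actually changed

-- Insert rows that exist only in staging.
INSERT dbo.Sales (SaleID, Amount, ModifiedOn)
SELECT s.SaleID, s.Amount, s.ModifiedOn
FROM   stg.Sales AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Sales AS t WHERE t.SaleID = s.SaleID);
```

If the comparison requires checksums or special indexes just for this purpose, remember the rule of thumb above: past roughly 10% change, a full reload is often cheaper.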

Partition the problem.
One of the main tenets of scalable computing is to partition problems into smaller, more manageable chunks. This allows you to more easily handle the size of the problem and make use of running parallel processes in order to solve the problem faster.
For ETL designs, you will want to partition your source data into smaller chunks of equal size. This latter point is important because if you have chunks of different sizes, you will end up waiting for one process to complete its task. For example, looking at the graph below, you will notice that for the four processes executed on partitions of equal size, the four processes will finish processing January 2008 at the same time and then together continue to process February 2008. But for the partitions of different sizes, the first three processes will finish processing but wait for the fourth process, which is taking a much longer time. The total run time will be dominated by the largest chunk.
To create ranges of equal-sized partitions, use time period and/or dimensions (such as geography) as your mechanism to partition. If your primary key is an incremental value such as an IDENTITY or another increasing value, you can use a modulo function. If you do not have any good partition columns, create a hash of the value of the rows and partition based on the hash value. For more information on hashing and partitioning, refer to the Analysis Services Distinct Count Optimization white paper; while the paper is about distinct count within Analysis Services, the technique of hash partitioning is treated in depth too.
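The modulo split over an increasing key, and the hash fallback, could be sketched like this (table and column names are hypothetical; @PartitionIndex would come from a package parameter, one value per parallel instance):

```sql
-- Each of N parallel package instances reads one slice of the source.
DECLARE @Partitions     INT = 4;   -- total number of parallel readers
DECLARE @PartitionIndex INT = 0;   -- this instance's slice, 0..3

-- Modulo on an incremental key such as an IDENTITY column.
SELECT *
FROM   dbo.SourceTable
WHERE  Id % @Partitions = @PartitionIndex;

-- If no good partition column exists, hash a business key instead.
SELECT *
FROM   dbo.SourceTable
WHERE  ABS(CHECKSUM(BusinessKey)) % @Partitions = @PartitionIndex;
```

Either predicate carves the source into roughly equal, non-overlapping chunks, which is exactly what the equal-sized-partition advice above calls for.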

Some other partitioning tips:
• Use partitioning on your target table. This way you will be able to run multiple versions of the same package, in parallel, that insert data into different partitions of the same table. When using partitioning, the SWITCH statement is your friend. It not only increases parallel load speeds, but also allows you to efficiently transfer data. Please refer to the SQL Server Books Online article Transferring Data Efficiently by Using Partition Switching for more information.

• As implied above, you should design your package to take a parameter specifying which partition it should work on. This way, you can have multiple executions of the same package, all with different parameter and partition values, so you can take advantage of parallelism to complete the task faster.

• From the command line, you can run multiple executions by using the “START” command. A quick code example of running multiple robocopy statements in parallel can be found within the Sample Robocopy Script to custom synchronize Analysis Services databases technical note.

Minimize logged operations.
When you insert data into your target SQL Server database, use minimally logged operations if possible. When data is inserted into the database in fully logged mode, the log will grow quickly because each row entering the table also goes into the log.
Therefore, when designing Integration Services packages, consider the following:
• Try to perform your data flows in bulk mode instead of row by row. By doing this in bulk mode, you will minimize the number of entries that are added to the log file. This reduction will improve the underlying disk I/O for other inserts and will minimize the bottleneck created by writing to the log.

• If you need to perform delete operations, organize your data in a way that lets you TRUNCATE the table instead of running a DELETE. The latter places an entry in the log for each row deleted, while the former simply removes all of the data in the table with a small log entry representing the fact that the TRUNCATE occurred. Contrary to popular belief, a TRUNCATE statement can participate in a transaction.

• Use the SWITCH statement and partitioning. If partitions need to be moved around, you can use the SWITCH statement (to switch in a new partition or switch out the oldest partition), which is a minimally logged statement.

• Be careful when using DML statements; if you mix DML statements in with your INSERT statements, minimal logging is suppressed.
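The point above that TRUNCATE can participate in a transaction is easy to demonstrate; a short sketch with a hypothetical staging table:

```sql
BEGIN TRANSACTION;

    -- TRUNCATE logs the page deallocations, so it can be rolled back
    -- like any other statement inside a transaction.
    TRUNCATE TABLE dbo.StagingSales;

-- Something failed downstream, so undo the truncate as well.
ROLLBACK TRANSACTION;

-- At this point dbo.StagingSales still contains all of its rows.
```

This makes TRUNCATE safe to use inside an ETL transaction while still being far cheaper than a row-by-row DELETE.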

Schedule and distribute it correctly.
After your problem has been chunked into manageable sizes, you must consider where and when these chunks should be executed. The goal is to avoid having one long-running task dominate the total time of the ETL flow.
A good way to handle execution is to create a priority queue for your package and then execute multiple instances of the same package (with different partition parameter values). The queue can simply be a SQL Server table. Each package should include a simple loop in the control flow:
1. Pick a relevant chunk from the queue:
1. “Relevant” means that it has not already been processed and that all chunks it depends on have already run.
2. If no item is returned from the queue, exit the package.
2. Perform the work required on the chunk.
3. Mark the chunk as “done” in the queue.
4. Return to the start of loop.
Picking an item from the queue and marking it as “done” (steps 1 and 3 above) can be implemented as a stored procedure, for example.
The queue acts as a central control and coordination mechanism, determining the order of execution and ensuring that no two packages work on the same chunk of data. Once you have the queue in place, you can simply start multiple copies of DTEXEC to increase parallelism.
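The pick-and-mark procedures (steps 1 and 3) might look something like the sketch below. The queue table and its columns are hypothetical; the key idea is that UPDLOCK plus READPAST lets multiple DTEXEC instances claim chunks concurrently without ever grabbing the same one or blocking each other:

```sql
CREATE PROCEDURE dbo.PickNextChunk
AS
BEGIN
    SET NOCOUNT ON;
    -- Claim the next pending chunk in one atomic statement.
    -- UPDLOCK serializes claimers; READPAST makes a concurrent
    -- package skip rows another instance has locked, not wait on them.
    UPDATE TOP (1) q
    SET    q.Status = 'InProgress'
    OUTPUT inserted.ChunkId, inserted.PartitionValue
    FROM   dbo.WorkQueue AS q WITH (UPDLOCK, READPAST)
    WHERE  q.Status = 'Pending';
END;
GO

CREATE PROCEDURE dbo.MarkChunkDone
    @ChunkId INT
AS
    UPDATE dbo.WorkQueue SET Status = 'Done' WHERE ChunkId = @ChunkId;
GO
```

Each package loops: call dbo.PickNextChunk, exit if no row comes back, otherwise process the chunk and call dbo.MarkChunkDone. Dependency checks between chunks, if needed, would be added to the WHERE clause.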

Sunday, May 29, 2011

SSIS Package Deployment (which one is the best)

File system:
1. If the system is secure (not everyone has access), then the file system is the best option.
2. We can set dynamic sources (connection manager strings) without using a config file.
3. A good choice if all environments are similar (same setup).
4. Easy to deploy and maintain.
5. If any assembly is used in the package, the file system is better.
6. After a modification, you just need to place the package in the specific location.
7. A good option if we have a lot of packages to deploy.

SQL Server:
1. We can deploy two packages with the same name but different IDs, which creates problems when executing the package.
2. Best from a security point of view (without access, no one can even execute the package).
3. A little tricky to set a dynamic source path (connection manager settings), so a config file is needed for that.
4. Security is really the only benefit here; beyond that there is no further advantage.
5. Need to build/deploy every time before adding the package to a SQL Agent job.
6. If the job is on one server and the packages are on a different server, you need to create a proxy, which then runs the job and executes the packages.
7. Good if we have a small number of packages (fewer than 10).

Conclusion: Overall, I feel the file system is the better option if we have a secure environment.

Wednesday, May 25, 2011

Flat Files Archiving in SSIS

In Microsoft Visual C# 2008:

Two variables need to be declared:
FlatFilePath: String type (to hold the archive root path)
SourceFileDir: String type (to hold the source folder path)

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using System.IO;

namespace ST_f580a072cab2401eab429eeca3b97e74.csproj
{
    [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        #region VSTA generated code
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion

        public void Main()
        {
            // Trim the trailing backslash, then build a timestamped archive
            // folder path, e.g. ...\ArchFolderName_2011-05-25_10-30-00
            int len = Dts.Variables["FlatFilePath"].Value.ToString().Length;
            string ActiveDir = Dts.Variables["FlatFilePath"].Value.ToString().Substring(0, len - 1);
            string NewPath = System.IO.Path.Combine(ActiveDir,
                "ArchFolderName" + string.Format("_{0:yyyy-MM-dd_hh-mm-ss}", DateTime.Now));

            DirectoryInfo dirSrc = new DirectoryInfo(Dts.Variables["SourceFileDir"].Value.ToString());
            // Make sure the archive folder exists before moving files into it.
            DirectoryInfo dirDest = Directory.CreateDirectory(NewPath);

            FileInfo[] Files = dirSrc.GetFiles();
            if (Files.Length > 0)
            {
                foreach (FileInfo aFile in Files)
                {
                    string destFile = System.IO.Path.Combine(dirDest.FullName, aFile.Name);
                    if (File.Exists(destFile))
                        File.Delete(destFile);
                    aFile.MoveTo(destFile);   // move each flat file into the archive folder
                }
            }
            Dts.TaskResult = (int)ScriptResults.Success;
        }
    }
}

Flat File Existence Check in SSIS

In Microsoft Visual Basic 2008:

Two variables need to be declared:
bolFileExists: Boolean type (set to True when the file exists)
FlatFilePath: String type (to store the flat file path)

Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Imports System.IO

<System.AddIn.AddIn("ScriptMain", Version:="1.0", Publisher:="", Description:="")> _
<System.CLSCompliantAttribute(False)> _
Partial Public Class ScriptMain
    Inherits Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase

    Enum ScriptResults
        Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success
        Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
    End Enum

    Public Sub Main()

        Dim fileLoc, fileName As String
        If Dts.Variables.Contains("User::FlatFilePath") = True Then
            fileLoc = CStr(Dts.Variables("User::FlatFilePath").Value)
            fileName = "FolderName\TxtFileName.txt"
            ' Set the flag according to whether the flat file exists.
            If File.Exists(fileLoc + fileName) Then
                Dts.Variables.Item("User::bolFileExists").Value = True
            Else
                Dts.Variables.Item("User::bolFileExists").Value = False
            End If
            Dts.TaskResult = ScriptResults.Success
        Else
            Dts.TaskResult = ScriptResults.Failure
        End If
    End Sub

End Class

Monday, May 23, 2011

Configuration of Performance Point Services on SharePoint 2010

To work with PPS, first install Dashboard Designer.
PerformancePoint Dashboard Designer is a tool that you can use to create dashboards, scorecards, and reports and publish them to a SharePoint site. Dashboard Designer is part of PerformancePoint Services in Microsoft SharePoint Server 2010.
You use PerformancePoint Dashboard Designer to create, edit, and publish dashboards to Microsoft SharePoint Server 2010.
If PerformancePoint Services in Microsoft SharePoint Server 2010 is configured for your organization and you have the necessary permissions, you can install or launch Dashboard Designer. The best way to open Dashboard Designer for the first time is to use your web browser to install it from the Business Intelligence Center.
Important: We recommend that you use Internet Explorer to launch Dashboard Designer for the first time. After that, you can launch Dashboard Designer from the Start menu (click Start > All Programs > SharePoint > PerformancePoint Dashboard Designer).
• If you are using Mozilla Firefox as your Web browser, make sure that Microsoft .NET Framework Assistant 1.1 is installed on your computer before you attempt to install Dashboard Designer.
• If you are using Safari as your Web browser, make sure to use either Internet Explorer or Firefox to install Dashboard Designer for the first time. Dashboard Designer does not launch from the Business Intelligence Center when you use the Safari browser.
The Business Intelligence Center is a pre-built Web site, or site template, that is optimized to help you manage the working elements of business intelligence (BI) reporting: scorecards, dashboards, data connections, status lists, status indicators, and so on. You can customize a Business Intelligence Center site as much as you like, or you can just start by using its built-in tools.
If you don’t have a web application created, follow the steps below:
1. First, create a new web application (or use an existing one) and create a site collection with the Business Intelligence Center template.
Navigate to Central Administration > Application Management > New Web Application [in the ribbon above]
After that, follow the steps below.
If you already have a web application created, start here:
1. Navigate to Central Administration > Create Site Collection
a) Specify the web application in which you want to create the site collection
b) Select the template: Enterprise > Business Intelligence Center
c) Specify a user who has admin rights

2. After that you will see the Business Intelligence Center page.

Now it’s time to download the designer.
3. In the Business Intelligence Center, click a section, such as Monitor Key Performance, Build and Share Reports, or Create Dashboards. Additional information about each area is displayed in the center pane. Click the link that says “Start using PerformancePoint Services.”

4. The PerformancePoint site template opens. Click Run Dashboard Designer.
5. If you have the necessary permissions assigned to you in SharePoint Server, Dashboard Designer will install and open.

After you have opened Dashboard Designer at least once, you can open it by using additional methods, such as the following:
• Set up a shortcut on your desktop to the SharePoint site that contains your dashboard items. Then, double-click the desktop shortcut to open Dashboard Designer.
• Click Start, and then click All Programs. Locate SharePoint, and then click PerformancePoint Dashboard Designer.
Now test whether you are able to communicate with the data source and create the PPS components. Follow the steps below:
Try to create a new data source and see whether you are able to connect to the particular database.
If not, you will see an error:

Now see whether the services are running in SharePoint:
1. Navigate to Central Administration > Application Management > [Service Applications] > Manage Services on Server
a) Check that PerformancePoint Service and Secure Store Service are started.
2. Navigate to Central Administration > Application Management > [Service Applications] > Manage Service Applications
a) Check that the PerformancePoint Service and Secure Store Service applications and their proxies are started.
b) If no proxy exists for the above two services, create one by navigating to the ribbon above > New > PerformancePoint Service Application or Secure Store Service Application.
c) Check that both service applications and their proxies are started.
3. Navigate to Central Administration > Application Management > [Service Applications] > Configure Service Application Associations
a) Check that your web application has the PerformancePoint Service and Secure Store Service proxies in the default proxy group.
Here’s the fix:
4. Navigate to Central Administration > Manage Service Applications > click on Secure Store Service

5. Click Generate New Key from the ribbon
6. Type in a pass phrase and click OK
After this, the message below will appear:
“There are no Secure Store Target Applications in this Secure Store Service Application. You can create a new Target Application from the Manage Target Applications group in the Edit ribbon group.”
7. Create a target application:
a) Give it an ID, display name, contact email, and target application type (Individual)
b) Specify the username and password, depending on the admin rights
c) Specify credentials that have admin rights to allow access.
8. Go to the target application ID and set the credentials

Set the credentials
9. Navigate to Central Administration > Manage Service Applications > PerformancePoint Service Application > PerformancePoint Service Application Settings
10. Configure the unattended service account
11. Start using Dashboard Designer
12. Test whether the data source connects to the appropriate database

Thursday, May 19, 2011

Deploying SSRS reports through rs.exe utility

The rs utility is a script host that can be used to perform scripted operations. It runs Microsoft Visual Basic scripts that copy data between report server databases, create items in the report server database, publish reports, and so on. Report servers configured for SharePoint integrated mode do not support the rs utility.
rs {-?}
{-i input_file=}
{-s serverURL}
{-u username}
{-p password}
{-e endpoint}
{-l time_out}
{-b batchmode}
{-v globalvars=}
{-t trace}

-?
(Optional) Displays the syntax of rs arguments.
-i input_file
(Required) Specifies the .rss file to execute. This value can be a relative or fully qualified path to the .rss file.
-s serverURL
(Required) Specifies the Web server name and report server virtual directory name to execute the file against. An example of a report server URL is http://examplewebserver/reportserver. The prefix http:// or https:// at the beginning of the server name is optional. If you omit the prefix, the report server script host tries to use https first, and then uses http if https does not work.
-u [domain\]username
(Optional) Specifies a user account used to connect to the report server. If -u and -p are omitted, the current Windows user account is used.

-p password
(Required if -u is specified) Specifies the password to use with the -u argument. This value is case-sensitive.
-e endpoint
(Optional) Specifies the SOAP endpoint against which the script should run. Valid values are the following:
• Mgmt2010
• Mgmt2006
• Mgmt2005
• Exec2005
If a value is not specified, the Mgmt2005 endpoint is used. Note that the Mgmt2000 value is discontinued in this release and is no longer a valid value. For more information about the SOAP endpoints and deprecated features, see the Reporting Services documentation.
-l time_out
(Optional) Specifies the number of seconds that elapse before the connection to the server times out. The default is 60 seconds. If you do not specify a time-out value, the default is used. A value of 0 specifies that the connection never times out.
-b
(Optional) Specifies that the commands in the script file run in a batch. If any commands fail, the batch is rolled back. Some commands cannot be batched, and those run as usual. Only exceptions that are thrown and are not handled within the script result in a rollback. If the script handles an exception and returns normally from Main, the batch is committed. If you omit this parameter, the commands run without creating a batch.
-v globalvar=value
(Optional) Specifies global variables that are used in the script. If the script uses global variables, you must specify this argument. The value that you specify must be valid for a global variable defined in the .rss file. You must specify one global variable for each -v argument.
The -v argument is specified on the command line and is used to set the value for a global variable that is defined in your script at run time. For example, if your script contains a variable named parentFolder, you can specify a name for that folder on the command line:
rs.exe -i myScriptFile.rss -s http://myServer/reportserver -v parentFolder="Financial Reports"
Global variables are created with the names given and set to the values supplied. For example, -v a="1" -v b="2" results in a variable named a with a value of "1" and a variable b with a value of "2".
Global variables are available to any function in the script. A backslash followed by a quotation mark (\") is interpreted as a double quotation mark. The quotation marks are required only if the string contains a space. Variable names must be valid in Microsoft Visual Basic; they must start with an alphabetical character or underscore and contain only alphabetical characters, digits, or underscores. Reserved words cannot be used as variable names.
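As a minimal sketch of this mechanism (the script file name createFolder.rss and the server URL are placeholders), an .rss script can use a -v global variable without declaring it anywhere; the rs.exe script host injects it at run time:

```vb
' parentFolder is NOT declared in the script; rs.exe injects it from
' the command line, e.g.:
'   rs -i createFolder.rss -s http://myServer/reportserver -v parentFolder="Financial Reports"
Public Sub Main()
    rs.Credentials = System.Net.CredentialCache.DefaultCredentials
    'use the injected value like any other String variable
    rs.CreateFolder(parentFolder, "/", Nothing)
    Console.WriteLine("Created folder: {0}", parentFolder)
End Sub
```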
-t trace
(Optional) Outputs error messages to the trace log. This argument does not take a value.
Note: To run the tool, you must have permission to connect to the report server instance you are running the script against. You can run scripts to make changes to the local computer or a remote computer. To make changes to a report server installed on a remote computer, specify the remote computer in the -s argument.
RS.exe is located at \Program Files\Microsoft SQL Server\100\Tools\Bin. You can run the utility from any folder on your file system.
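Pulling the optional arguments above together, a hypothetical invocation (the server name, script file, and variable name are placeholders) might look like this:

```
rs -i DeployReports.rss -s http://myServer/reportserver -e Mgmt2005 -l 120 -b -t -v parentFolder="Financial Reports"
```

This runs DeployReports.rss against the Mgmt2005 endpoint with a 120-second connection time-out, executes its commands as a batch that rolls back on an unhandled exception, and writes error messages to the trace log.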
Using the RS.exe utility:
• It is a command-line utility.
• Used for performing deployment and administrative tasks programmatically.
• Executes the RSS (Reporting Services Script) file.
• This walkthrough assumes you have already created a sample report; here the report name is MyReport.rdl.
• Create the .rss file:
o Open Notepad, copy the code below, and save it as an .rss file.
o Inside the code, make the appropriate changes (i.e. DestinationFolder and ReportSource).

Dim definition As [Byte]() = Nothing
Dim warnings As Warning() = Nothing
Dim DestinationFolder As String = "MyReports"
Dim parentPath As String = "/" + DestinationFolder
Dim ReportSource As String

Public Sub Main()

    'Location of the rdl file
    ReportSource = "C:\Reports"

    rs.Credentials = System.Net.CredentialCache.DefaultCredentials

    Try
        'create the parent folder on the report server only if it does not already exist
        If rs.GetItemType(parentPath) <> Microsoft.SqlServer.ReportingServices2005.ItemTypeEnum.Folder Then
            rs.CreateFolder(DestinationFolder, "/", Nothing)
            Console.WriteLine("Parent folder [MyReports] created successfully.")
        End If

        'method called to create the data source used in the report
        'CreateReportDataSource(name, extension, connectionstring)
        CreateReportDataSource("MyDataSource", "SQL", "Data Source=(local);Initial Catalog=AdventureWorks")

        'method called to deploy the report
        PublishReport("MyReport")

        Console.WriteLine("Tasks completed successfully.")

    Catch ex As Exception
        Throw ex
    End Try
End Sub

Public Sub CreateReportDataSource(ByVal name As String, ByVal extension As String, ByVal connectionString As String)
    'Data source definition.
    Dim definition As New DataSourceDefinition()
    definition.CredentialRetrieval = CredentialRetrievalEnum.Integrated
    definition.ConnectString = connectionString
    definition.Enabled = True
    definition.Extension = extension

    Try
        rs.CreateDataSource(name, parentPath, False, definition, Nothing)
        Console.WriteLine("Data source: {0} created successfully.", name)
    Catch e As Exception
        Console.WriteLine("ERROR creating data source: " + name)
        Throw e
    End Try

End Sub

Public Sub PublishReport(ByVal reportName As String)
    Try
        'read the rdl file into a byte array
        Dim stream As FileStream = File.OpenRead(ReportSource + "\" + reportName + ".rdl")
        definition = New [Byte](stream.Length) {}
        stream.Read(definition, 0, CInt(stream.Length))
        stream.Close()

        rs.CreateReport(reportName, parentPath, False, definition, Nothing)
        Console.WriteLine("Report: {0} published successfully.", reportName)

    Catch e As Exception
        Console.WriteLine("ERROR while Publishing report: " + reportName)
        Throw e
    End Try
End Sub

The sample RSS file creates a data source that points to the AdventureWorks database on the (local) SQL Server instance using Windows authentication. The helper function used is: CreateReportDataSource(name, extension, connectionstring)

CreateReportDataSource("MyDataSource", "SQL", "Data Source=(local);Initial Catalog=AdventureWorks")
It also publishes a sample report using the CreateReport function.
CreateReport(NameofReport, ParentFolder, Overwrite, ReportDefinition, Properties)

CreateReport("MyReport", "/MyReports", False, definition, Nothing)
If the report contains any external images, they must also be deployed as resources. This is not required if the images are embedded in the report. The function used is: CreateResource(ImageName, ParentFolder, Overwrite, Definition, MIMEType, Properties)

CreateResource(imageName, parentPath, False, definition, "image/jpeg", Nothing)
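The deployment script above does not include such a resource step. A hypothetical helper in the same style as PublishReport (the .jpg extension and the assumption that the image sits next to the .rdl files are illustrative) might look like this:

```vb
Public Sub PublishResource(ByVal imageName As String)
    Try
        'read the image file into a byte array
        '(in VB the array bound is the upper index, so Length - 1 gives exactly Length bytes)
        Dim stream As FileStream = File.OpenRead(ReportSource + "\" + imageName + ".jpg")
        Dim resourceDefinition As [Byte]() = New [Byte](stream.Length - 1) {}
        stream.Read(resourceDefinition, 0, CInt(stream.Length))
        stream.Close()

        'deploy the image as a resource next to the report
        rs.CreateResource(imageName, parentPath, False, resourceDefinition, "image/jpeg", Nothing)
        Console.WriteLine("Resource: {0} published successfully.", imageName)
    Catch e As Exception
        Console.WriteLine("ERROR while publishing resource: " + imageName)
        Throw e
    End Try
End Sub
```

A call such as PublishResource("CompanyLogo") from Main would then upload CompanyLogo.jpg into the same /MyReports folder.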
To execute the .rss file, place it in an appropriate folder (e.g. D:\MyReports), open the command prompt, set the directory to that folder, and run the following command:
rs -i MyRSS.rss -s http://localhost/reportserver

After successful execution, the MyReports folder will be created on the report server, the MyDataSource data source will be created, and the report will be deployed. It can be viewed using Report Manager.

Monday, May 2, 2011

10-Minute Video Introduction to Scrum