Thursday, October 28, 2010

How to set SSRS Subscriptions

Reports can be pulled from Report Manager by browsing to the report the user wants. To deliver reports to users at a specific time, in a specific format, use subscriptions. A subscription is a standing request to deliver a report at a specific time or in response to an event, and then to have that report presented in a way that you define. Subscriptions provide an alternative to running a report on demand. On-demand reporting requires that you actively select the report each time you want to view it. In contrast, subscriptions can be used to schedule and then automate the delivery of a report.
Report subscriptions are processed on the Report Server. They reach end users through delivery extensions that are deployed on the Report Server.

There are two kinds of subscriptions:
• Standard / Event-Driven Subscriptions
• Data-Driven Subscriptions
Standard subscriptions are created and managed by individual users. A standard subscription consists of static values that cannot be varied during subscription processing.
Data-driven subscriptions are dynamic in that the presentation, delivery, and parameter values are retrieved at run time from a data source. You might use data-driven subscriptions if you have a very large recipient list or if you want to vary report output for each recipient. To use data-driven subscriptions, you must have expertise in building queries and an understanding of how parameters are used. Report server administrators typically create and manage these subscriptions.
Subscriptions use delivery extensions to determine how to distribute a report and in what format. When a user creates a subscription, he or she can choose one of the available delivery extensions to determine how the report is delivered. Reporting Services includes support for e-mail delivery and delivery to a file share. Developers can create additional delivery extensions to route reports to other locations. Another delivery method is called the null delivery provider. This method is not available to users. Null delivery is used by administrators to improve report server performance by preloading the cache. Subscription information is stored with individual reports in a report server database. You cannot manage subscriptions separately from the report to which they are associated.
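Because subscription information is stored in the report server database, one quick (if unsupported) way to see every subscription on a server is to query the catalog directly. This is a sketch that assumes the default ReportServer database name; these tables are undocumented and may change between versions:

```sql
-- List all subscriptions with their owning report and last delivery status
SELECT c.Name               AS ReportName,
       s.Description        AS SubscriptionDescription,
       s.DeliveryExtension,
       s.LastStatus,
       s.LastRunTime
FROM ReportServer.dbo.Subscriptions AS s
JOIN ReportServer.dbo.[Catalog]     AS c
  ON c.ItemID = s.Report_OID
ORDER BY c.Name
```

Because these tables are tied to the report catalog, deleting a report also removes its rows here, which is why subscriptions cannot be managed separately from their reports.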

E-Mail Delivery in Reporting Services:
Reporting Services includes an e-mail delivery extension that provides a way to e-mail a report to individual users or groups. The e-mail delivery extension is configured through the Reporting Services Configuration Manager tool and by editing the Reporting Services configuration files.

In the Reporting Services Configuration Manager, select Email Settings and configure the Sender Address, Current SMTP Delivery Method, and SMTP Server.

To distribute or receive a report by e-mail, you define either a standard subscription or a data-driven subscription.
Note: You can subscribe to or distribute only one report at a time. You cannot create a subscription that delivers multiple reports in a single e-mail message. Click the “New Subscription” button to start adding an email subscription.

An E-Mail will be generated and sent to the email addresses found in the recipients’ field.
To:, Cc: and Bcc: – fairly standard fields. You can enter the alias if you’re sending to an internal address, and must type the full email address if sending to an external address.

Reply-To: – The email will be sent from the address specified in SQL Server Reporting Services Configuration Manager on the server. However, if you would like replies to be redirected, simply enter a different address here. I use a generic email address that all reports are sent from.

Subject: – what will appear in the subject line of the email. This can be edited to anything you would like. The two parameters provided, @ReportName and @ExecutionTime, are the only parameters that are allowed in this field.

Include Report – This option, if checked, will attach the report in the format you specify. Consider the size of the report, and the format you want to send it in, when choosing this option. If you’re going to be sending a large report, consider mail size limits on the recipient’s mail server. You may need to send a link, instead.

Render Format – If you have chosen to include the report, you can pick a format here. The formats you can pick from vary based on your version of Reporting Services.
• XML file with report data – the user receives an .xml file as an attachment.
• CSV (comma delimited) – the user receives a .csv file as an attachment.
• Acrobat (PDF) file – the user receives a .pdf file as an attachment.
• MHTML (web archive) – the user receives the report embedded in the body of the email, rather than as an attachment. (I was really excited when I discovered this!)
• Excel – the user receives an .xls file as an attachment.
• TIFF file – the user receives a .tiff file as an attachment.
• Word – the user receives a .doc file as an attachment. This is new in 2008.

Include Link – This option, if checked, will embed a link to the report in the body of the email. Note that if you do this, the report parameters will be embedded in the link.
Priority: – This sets a priority of High, Normal or Low for Outlook. High is the little red exclamation point, and Low is the little blue down arrow.

Comment: – This field controls what will be in the body of the email. Comments will appear above the report, if you selected MHTML, and above the link, if you chose to include it.
Schedule the subscription by clicking Schedule Details: here you can select hourly, daily, weekly, monthly, and many more options. Click OK, and your subscription is created. Wait for the time you scheduled.

Another place where you can see the subscription delivery is in SQL Server Management Studio, under SQL Server Agent, in the Job Activity Monitor: Connect to SQL Server --> SQL Server Agent --> Job Activity Monitor
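Each subscription schedule is created as a SQL Server Agent job whose name is the schedule's GUID, which makes the job list hard to read. A query like the following maps the Agent jobs back to their reports; it is a sketch against the undocumented ReportServer tables and assumes the default database name:

```sql
-- Map SQL Server Agent jobs back to the report subscriptions they run
SELECT j.name          AS AgentJobName,
       c.Name          AS ReportName,
       s.Description   AS SubscriptionDescription,
       s.LastStatus
FROM msdb.dbo.sysjobs AS j
JOIN ReportServer.dbo.ReportSchedule AS rs
  ON j.name = CONVERT(nvarchar(128), rs.ScheduleID)
JOIN ReportServer.dbo.Subscriptions  AS s
  ON s.SubscriptionID = rs.SubscriptionID
JOIN ReportServer.dbo.[Catalog]      AS c
  ON c.ItemID = rs.ReportID
```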

Windows File Share Delivery in Reporting Services:
Reporting Services includes a file share delivery extension so that you can deliver a report to a folder. The file share delivery extension is available by default and requires no additional configuration. To distribute a report to a file share, you define either a standard subscription or a data-driven subscription. You can subscribe to and request delivery for only one report at a time. When defining a subscription that uses file share delivery, you must specify an existing folder as the target folder. The report server does not create folders on the file system. The folder that you specify must be accessible over a network connection. When specifying the target folder in a subscription, use Uniform Naming Convention (UNC) format that includes the computer's network name. Do not include trailing backslashes in the folder path. For example, a UNC path looks like this: \\<servername>\<sharefolder>

File Name: – This is the name the file will be saved as. Just as a note, you can’t use the same parameters that you can with email. I’ve tested it, and they just appear as @ReportName and @ExecutionTime in the file name. You can, however, use the parameter @timestamp. This will add the time the report was saved to the file name. This is useful for creating uniquely named files.
Add a file extension when the file is created – If this is selected, the file will be assigned the extension corresponding to the file format you select (for example, .xls for Excel). If this is not selected, no extension is assigned to the file, although it will still open in the selected format.

Path: – The location on the network the file will be saved to. The folder must already exist – Reporting Services won’t create it. Also, you must use the UNC format: \\servername\foldername\subfolder.

Render Format: – The file format the report will be saved in. The formats you can pick from vary based on your version of Reporting Services.
• XML file with report data – the report is saved as an .xml file.
• CSV (comma delimited) – the report is saved as a .csv file.
• Acrobat (PDF) file – the report is saved as a .pdf file.
• HTML 4.0 – the report is saved as an .html file. Note: if there are images in your report, they will not show in this format.
• MHTML (web archive) – the report is saved as a .mhtml file.
• Excel – the report is saved as an .xls file.
• RPL Renderer – the report is saved as an .rpl file. What is .rpl? RPL (Report Page Layout) is a rendering format used in newer versions of Reporting Services to let the client share some of the workload of rendering the report viewer control, which improves performance when loading reports.
• TIFF file – the report is saved as a .tif file.
• Word – the report is saved as a .doc file.
Credentials used to access the file share: – This must be a network account, with access to the folder you are saving the file to.

Overwrite options: – These are options for what will happen if the report has previously been saved to the network location.

• Overwrite an existing file with a newer version – Just what it says! If a file with the same name and extension exists in that folder, it will be overwritten. If I save “Sales Report.xls” at 10:00 AM on Monday, then save “Sales Report.xls” at 10:00 AM on Tuesday, only the file from Tuesday will exist.

• Do not overwrite the file if a previous version exists – Again, this is just what it says! If a file with the same name and extension exists in that folder, the file will not save. If you look at the report subscription Status, you will see the message, “A file named 'ReportName' already exists in the destination folder. To continue, do one of the following: remove the existing file, specify a different name, or set overwrite options to allow the replacement of the existing file with a newer version.”

• Increment file names as newer versions are added – The first time the file is saved, it is as “Report Name.doc”. The next time, it is “Report Name_1.doc”, then “Report Name_2.doc”, etc. Another way to achieve the same effect is to use the @timestamp parameter in the File Name, and set the Overwrite option to “Do not overwrite…”
Schedule the subscription by clicking Schedule Details: Here you can select hourly, daily, weekly, monthly and many more options.

After the scheduled time expires verify the status as you did in the email delivery example. Also go and verify that the report has arrived in the shared folder.
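The status check can also be scripted: the LastStatus column of the Subscriptions table records the outcome of the most recent delivery, such as whether the file was written or the overwrite was refused. A sketch, again assuming the default ReportServer database name and undocumented tables:

```sql
-- Check the most recent delivery status of file share subscriptions
SELECT s.Description,
       s.LastStatus,
       s.LastRunTime
FROM ReportServer.dbo.Subscriptions AS s
WHERE s.DeliveryExtension LIKE '%FileShare%'
```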


A data-driven subscription provides a way to use dynamic subscription data that is retrieved from an external data source at run time. A data-driven subscription can also use static text and default values that you specify when the subscription is defined. You can use data-driven subscriptions to do the following:
• Distribute a report to a fluctuating list of subscribers. For example, you can use data-driven subscriptions to distribute a report throughout a large organization where subscribers vary from one month to the next, or use other criteria that determines group membership from an existing set of users.
• Filter the report output using report parameter values that are retrieved at run time.
• Vary report output formats and delivery options for each report delivery.
Data-driven subscriptions send out reports to not just one individual or two but to a large number of recipients whose desired format (among other details) are maintained in a database on the server. This is report delivery targeted at mass distribution. In order to work with this hands-on exercise you need to have a database of recipients ready. When the subscription is processed, the Report Server customizes the output for each of the recipients maintained on the database.
Note: To use data-driven subscriptions, you must have the Evaluation, Developer, or Enterprise edition of SQL Server. This feature is not supported in the Standard or Express editions.
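If you are not sure which edition you are running, SERVERPROPERTY will tell you:

```sql
-- Returns the edition (e.g. 'Enterprise Edition') and build number
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion
```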

Create a Subscription database in SQL Server 2008
Before you can send the email to a number of recipients, you need to have information about their email addresses, in the correct format, in the SQL Server database which stores subscriber information.
Connect to SQL Server Management StudioCreate New database Subscribers

CREATE TABLE [dbo].[RecipientInfo]
(
    [SubscriptionID] [int] NOT NULL PRIMARY KEY,
    [RecipientID] [int],
    [Email] [nvarchar](50) NOT NULL,
    [FileType] [int],          -- holds values such as 1, 2, 3, so int rather than bit
    [Format] [nvarchar](20) NOT NULL
)

-- Substitute real email addresses for the empty strings below
INSERT INTO [dbo].[RecipientInfo] (SubscriptionID, RecipientID, Email, FileType, Format)
VALUES (101, 289, '', 1, 'Excel')
INSERT INTO [dbo].[RecipientInfo] (SubscriptionID, RecipientID, Email, FileType, Format)
VALUES (201, 284, '', 2, 'Word')
INSERT INTO [dbo].[RecipientInfo] (SubscriptionID, RecipientID, Email, FileType, Format)
VALUES (301, 275, '', 3, 'PDF')

Creating the data-driven subscription by email
Open a report and go to the Subscriptions tab. Click “New Data-driven Subscription”.

Step 1
The first step is to establish a description, delivery method and data source for this subscription.
Description – provide a name, which you will see on the Subscriptions tab for the report. I usually put in something that tells me how often this subscription is run and who or where is it being delivered to.
Specify how recipients are notified – This will determine how the report is delivered.
E-mail – The report will be delivered via email.
Specify a data source that contains recipient information:
Specify a shared data source – choose a data source that exists in the Reporting Services installation. I always use this option, as it makes administration much easier.
Specify for this subscription only – you can set up a data source for this item only.

Step 2
If you chose to use a shared data source, you will see this screen, where you can navigate your folders to find your data source.

Note: If you get an error saying the credentials could not be validated, go back and uncheck the Use Windows Credentials box, or check your connection string and credentials.

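Step 3
In this step, you define the query that returns one row for each delivery the subscription should make. Against the RecipientInfo table created earlier, it can be as simple as:

```sql
-- One row per delivery: who gets the report, and in what format
SELECT SubscriptionID, RecipientID, Email, FileType, Format
FROM dbo.RecipientInfo
```

Each column this query returns becomes available in the later steps as a "Get the value from the database" choice.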
Step 4
In this step, you will specify your email delivery information.
• To – who the email will be delivered to. Set a static email, or choose a value from your query.
• Cc – who the email will be cc’ed to. Set a static email, or choose a value from your query.
• Bcc – who the email will be bcc’ed to. Set a static email, or choose a value from your query.
• Reply to – by default, replies will go to the account the reports are sent from. You can set a different email address here. Set a static email, or choose a value from your query.
• Include Report – you can set a static or dynamic option of True or False.
• Render Format – you can set a static option, get the value from your query, or choose no format.
• Priority – this sets the priority in Outlook. You can set a static option, get the value from your query, or choose no value.
• Subject – the subject line of the email. You can set a static subject, or get the value from your query.
• Comment – this will appear in the body of the email. You can set a static option, get the value from your query, or choose no value.
• Include link – determines if you want to include a link to the report in the body of the email. You can set a static option, get the value from your query, or choose no value.
There are so many creative ways to use this! Perhaps you want the report to have a Priority of High if sales are above or below a certain dollar value. The Subject could be dependent on any field in the query. Perhaps you don’t include the report and the link if certain criteria are not met.
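For instance, the High-priority idea could be driven from the subscription query itself. A sketch (the SalesTotals table and the dollar threshold are hypothetical):

```sql
-- Deliveries for accounts above the threshold go out flagged High priority
SELECT r.Email,
       r.Format,
       CASE WHEN t.SalesTotal >= 100000 THEN 'HIGH'
            ELSE 'NORMAL'
       END AS Priority
FROM dbo.RecipientInfo AS r
JOIN dbo.SalesTotals   AS t
  ON t.RecipientID = r.RecipientID
```

In the delivery settings step, set Priority to "Get the value from the database" and pick the Priority column.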

Step 5
If your report includes parameters, you will set them here.

Step 6:
The final step is to define when the subscription will be processed.
• When the report data is updated on the report server – the subscription will be processed when the snapshot is refreshed, if the report is set up to use a snapshot.
• On a schedule created for this subscription – create a schedule for this one report subscription.
• On a shared schedule – use a pre-defined schedule, created in Site Settings.

Your data-driven subscription is now created. Wait for the schedule to complete, then verify that a mail was sent to all the recipients stored in the database.

Null Delivery Provider – The What and Why
A feature of Reporting Services that can be used to improve performance is report caching. To sum up what could be a future post of its own: caching will save a copy of the report in the ReportServerTempDB, which reduces the time it takes for a report to render for a user. This can be very useful if you have a report that takes a long time to generate, and is run frequently with no parameters or the same parameters.
When you set up a report to use caching, the cache is created the first time a user runs it. You set the cache to expire after a certain amount of time. So, say you have a report that takes 5 minutes to generate, and 10 users will run it every day, with the first one coming in at 7:00 AM. You set the report to cache a temporary copy, and expire it every day at 7:00 PM. The first user would come in and run it, and it would take 5 minutes. Every other user that runs it between the time of the first user and the expiration would notice a significant decrease in rendering time.
But, how can you help the users by reducing the time it takes the first user to run it in the morning?
The answer is to set up a report subscription using the Null Delivery Provider. This saves a copy of the report in cache at a time you specify – in this example, perhaps 6:00 AM. When the first user comes in to run the report, the rendering time is reduced.

Data-driven Subscriptions by Null Delivery Provider
Open a report and go to the Subscriptions tab. Click “New Data-driven Subscription”.

Step 1
The first step is to establish a description, delivery method and data source for this subscription.
Description – provide a name, which you will see on the Subscriptions tab for the report. I usually put in something that tells me how often this subscription is run and who or where is it being delivered to.
Specify how recipients are notified – This will determine how the report is delivered.
Choose Null Delivery Provider
Specify a data source that contains recipient information:
Specify a shared data source – choose a data source that exists in the Reporting Services installation. I always use this option, as it makes administration much easier.
Specify for this subscription only – you can set up a data source for this item only.

Step 2
If you chose to use a shared data source, you will see this screen, where you can navigate your folders to find your data source. I usually store mine in the “Data Sources” folder – that makes them easy to find!

If you choose to specify a data source for this subscription only, you will see this screen. Enter a connection string and credentials for your data source.

Step 3
In this step, you define the query that can return parameter values for you.
When using caching for a report, be aware that it works best when you have no parameters, or very few choices. Every distinct combination of parameters will create a different version of the report in cache.
I am going to set my start date, end date, and folder parameters for this report in this step.
You have the option to specify a time-out, in seconds.
You can click the Validate button to ensure the query will run against your data source.

Step 4
There are no delivery settings for Null Delivery Provider. The report is always “delivered” to ReportServerTempDB.

Step 5
If your report includes parameters, you will set them here. You can set a static value for each, or, as with the case below, choose a value from the query.

Step 6
The final step is to define when the subscription will be processed.
• When the report data is updated on the report server – the subscription will be processed when the snapshot is refreshed, if the report is set up to use a snapshot.
• On a schedule created for this subscription – create a schedule for this one report subscription.
• On a shared schedule – use a pre-defined schedule, created in Site Settings.
One thing to keep in mind with Null Delivery is that the copy of the report in cache must be expired before you create a new copy using this method. I would suggest setting up the report cache to expire on a specific schedule, shortly before this subscription runs.

This explanation of Null Delivery Provider is just one piece of report caching. Caching can be a way to increase performance for your users, if used correctly. Consider using both caching and data-driven subscriptions to improve your environment.

Creating a Deployment Manifest

SSIS - Creating a Deployment Manifest:

Using a Deployment Manifest in SSIS allows you to deploy a set of packages to a target location using a wizard for installing your packages. The benefit of using it is the nice user interface that a wizard provides. The disadvantage is its lack of flexibility: the Deployment Manifest is an all-or-nothing deployment, which means you cannot choose to deploy just one or two packages at a time from your SSIS project. Using the Deployment Manifest will deploy all the packages in your project. Creating the manifest file is a very simple process that I will walk you through.

Creating a Deployment Manifest

• With the Solution Explorer open, right-click on the project name and click Properties.
• This will open the Project Properties Pages.
• Select Deployment Utility from the page.
• Change the CreateDeploymentUtility property to True.
After you’ve gone through these steps, the next time you build your project it will create the file (YourProjectName).SSISDeploymentManifest. This file is located in the same folder as your packages, in the bin\Deployment folder.

If you run this file it will open the Package Installation Wizard that will allow you to deploy all your packages that were located in the project to a desired location.

Wednesday, October 27, 2010

What is Agile

Twelve principles:

1. Customer satisfaction by rapid delivery of useful software
2. Welcome changing requirements, even late in development.
3. Working software is delivered frequently (weeks rather than months)
4. Working software is the principal measure of progress
5. Sustainable development, able to maintain a constant pace
6. Close, daily cooperation between businesspeople and developers.
7. Face-to-face conversation is the best form of communication (co-location)
8. Projects are built around motivated individuals, who should be trusted
9. Continuous attention to technical excellence and good design
10. Simplicity
11. Self-organizing teams
12. Regular adaptation to changing circumstances

There are many specific agile development methods. Most promote development, teamwork, collaboration, and process adaptability throughout the life-cycle of the project.
Agile methods break tasks into small increments with minimal planning, and do not directly involve long-term planning. Iterations are short time frames (timeboxes) that typically last from one to four weeks. Each iteration involves a team working through a full software development cycle including planning, requirements analysis, design, coding, unit testing, and acceptance testing when a working product is demonstrated to stakeholders. This minimizes overall risk and allows the project to adapt to changes quickly. Stakeholders produce documentation as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration. Multiple iterations may be required to release a product or new features.

Team composition in an agile project is usually cross-functional and self-organizing without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration's requirements.

Agile methods emphasize face-to-face communication over written documents when the team is all in the same location. Most agile teams work in a single open office (called a bullpen), which facilitates such communication. Team size is typically small (5- 12 people) to simplify team communication and team collaboration. Larger development efforts may be delivered by multiple teams working toward a common goal or on different parts of an effort. This may require a coordination of priorities across teams. When a team works in different locations, they maintain daily contact through videoconferencing, voice, e-mail, etc.

No matter what development disciplines are required, each agile team will contain a customer representative. This person is appointed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer mid-iteration problem-domain questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (ROI) and ensuring alignment with customer needs and company goals.

Most agile implementations use a routine and formal daily face-to-face communication among team members. This specifically includes the customer representative and any interested stakeholders as observers. In a brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are. This face-to-face communication exposes problems as they arise.

Agile development emphasizes working software as the primary measure of progress. This, combined with the preference for face-to-face communication, produces less written documentation than other methods. The agile method encourages stakeholders to prioritize wants with other iteration outcomes based exclusively on business value perceived at the beginning of the iteration.

Comparison with other methods:
Agile methods are sometimes characterized as being at the opposite end of the spectrum from "plan-driven" or "disciplined" methods. This distinction may be misleading, as agile methods are not necessarily "unplanned" or "undisciplined". Agile teams may employ highly disciplined formal methods. A more accurate distinction is that methods exist on a continuum from "adaptive" to "predictive". Agile methods lie on the "adaptive" side of this continuum.

Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive team changes as well. An adaptive team will have difficulty describing exactly what will happen in the future. The further away a date is, the more vague an adaptive method will be about what will happen on that date. An adaptive team can not report exactly what tasks are being done next week, but only which features are planned for next month. When asked about a release six months from now, an adaptive team may only be able to report the mission statement for the release, or a statement of expected value vs. cost.

Predictive methods, in contrast, focus on planning the future in detail. A predictive team can report exactly what features and tasks are planned for the entire length of the development process. Predictive teams have difficulty changing direction. The plan is typically optimized for the original destination and changing direction can require completed work to be started over. Predictive teams will often institute a change control board to ensure that only the most valuable changes are considered.

Formal methods, in contrast to adaptive and predictive methods, focus on computer science theory with a wide array of types of provers. A formal method attempts to prove the absence of errors with some level of determinism. Some formal methods are based on model checking and provide counterexamples for code that cannot be proven. Generally, mathematical models (often supported through special languages; see the SPIN model checker) map to assertions about requirements. Formal methods depend on a tool-driven approach, and may be combined with other development approaches. Some provers do not easily scale. Like agile methods, manifestos relevant to high-integrity software have been proposed in Crosstalk.

Agile methods:
Well-known agile software development methods include:
1. Agile Modeling
2. Agile Unified Process (AUP)
3. Dynamic Systems Development Method (DSDM)
4. Essential Unified Process (EssUP)
5. Extreme Programming (XP)
6. Feature Driven Development (FDD)
7. Open Unified Process (OpenUP)
8. Scrum
9. Velocity tracking


Monday, October 25, 2010

What is SQL Azure

•What is the Cloud?
–Providing IT resources, as a service, in a dynamic and scalable manner over a network

•What does this mean?
–Access from any device (PC, Phone, Multi-Function Devices, TV…)
–Hosted centrally managed software and data
–Unlimited processing, CPU, storage, memory, data centers
–IT on demand

•Five essential characteristics of the Cloud:
–On-demand self-service
–Broad network access
–Resource pooling
–Rapid elasticity
–Measured service

SQL Azure is the first cloud service completely supporting the relational database model.
SQL Azure is fully committed to supporting T-SQL, SQL query, stored procedures, data views, and so on.
SQL Azure supports the traditional Windows user/password security model.
The first release of SQL Azure is almost fully compatible with all existing relational development tools and IDEs, such as SQL Server Management Studio (SSMS) and Visual Studio.
SQL Azure allows all SQL developers and IT staff to seamlessly migrate from an on-premises environment to a cloud environment with almost no learning curve. All knowledge and skills of SQL developers and IT staff can be applied to SQL Azure. This should ease a big concern raised when the Azure framework was announced in October 2008 regarding potential job losses in the IT industry (worries brought on because an organization's infrastructure and data storage hardware no longer need to exist within the organization).

SQL Azure supports PHP, which makes SQL Azure more friendly to Internet applications.
The most important thing is that SQL Azure is the first platform supporting the relational database model and database as a service running in the cloud. The major changes are not in the physical domain but in the logical virtual domain.

Since SQL Server and T-SQL have been in the market for decades, there is a lot of information that can be found on them. Please bear in mind that SQL Azure does not support object browsing from SQL Server Management Studio. To connect to the SQL Azure database in the cloud using SQL Server Management Studio, launch SQL Server Management Studio.
The authentication type that should be selected is SQL Server Authentication. The login name and password are the same as the ones you used to redeem your invitation code.

The connection to SQL Azure will be automatically deactivated if no activity is detected, and an error message will be thrown. A custom table can be created in the SQL Azure cloud database using SQL Server Management Studio, in a way that is pretty similar to creating a table on a server running in an on-premises environment. One difference, for example, is that the USE statement is not supported in SQL Azure; since we used New Query to connect to the specific database already, if you need to access a different database you start a new query by clicking the New Query button.
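For example, a table can be created with essentially the same DDL you would use on-premises, with one extra rule worth knowing: SQL Azure requires every table to have a clustered index. A sketch (the table and column names are made up):

```sql
-- No USE statement here: you are already connected to the target database.
-- SQL Azure requires a clustered index on every table.
CREATE TABLE dbo.Customer
(
    CustomerID int           NOT NULL,
    Name       nvarchar(100) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
)
```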
Note: Since SQL Azure does not support object browsing from SQL Server Management Studio, you move an existing database by generating a script for it. When generating the script, set the following options.

Convert UDDTs to Base Type: This option needs to be set to true since SQL Azure does not support user-defined types. They need to be converted into the underlying SQL Azure portable types.

Script extended properties:
This option needs to be set to false since SQL Azure does not support extended properties.

Script USE DATABASE: This option needs to be set to false since SQL Azure does not support the USE statement.

Script Data: This option needs to be set to false since we do not care about the data at this moment. If you run the generated script without any modification, you will get an error message.

Two radio buttons can be found to the left of the Test Connection button, which allow users to quickly switch the connection back and forth between the SQL Azure cloud service and the on-premises SQL workstation environment. There are two ways to execute a script: one is to click the Execute button, and the other is to select and highlight the text and press F5. If a portion of the script is selected, the caption of the Execute button changes to Execute Selected and the background color of the button turns gold.

Azure table storage supports the Language Integrated Query (LINQ) extensions for .NET 3.0 or later, ADO.NET Data Services, and representational state transfer (REST), which allows applications developed in non-.NET languages to access table storage via the Internet. There is no limit on the number of tables and entities or on the table size. Table storage developers do not need to handle data access transactions or optimistic concurrency for updates and deletes, nor worry about performance, since cloud storage is highly scalable. For long queries, or queries that encounter a time-out, partial results are returned and the unfinished query can be continued with the returned continuation token.
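The continuation-token behaviour described above can be sketched as a simple paging loop. This is a minimal illustration, not real table storage code: fetch_page here is a simulated stand-in for the REST query call, and only the loop shape is the point.

```python
# Sketch of draining a query that returns partial results plus a
# continuation token, as Azure table storage does for long queries.
# fetch_page simulates the real REST call: it returns a batch of
# entities and the token for the next page (None when finished).
def fetch_page(token=None):
    pages = {None: (["a", "b"], "t1"), "t1": (["c"], "t2"), "t2": (["d"], None)}
    return pages[token]

def fetch_all():
    entities, token = [], None
    while True:
        batch, token = fetch_page(token)
        entities.extend(batch)
        if token is None:  # no continuation token: the query is complete
            return entities

print(fetch_all())  # all four entities gathered across three partial results
```

The same loop shape applies regardless of how the page is fetched; the client simply resubmits the query with the last token it received until no token comes back.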

Challenges Facing Enterprise IT:
Provisioning, deploying and managing servers at scale
Enabling faster, more efficient development of applications with existing knowledge and toolsets
Reducing IT hardware and infrastructure costs

SQL Azure Database: (The first and only true relational database as a service)

Easy provisioning and deployment
Auto high-availability and fault tolerance
Self-maintaining infrastructure; self-healing
No need for server or VM administration

Elastic Scale:
Database utility; pay as you grow
Flexible load balancing
Business-ready SLAs
Enable multi-tenant solutions
World-wide presence

Developer Agility:
Build cloud-based database solutions on consistent relational model
Leverage existing skills through existing ecosystem of developer and management tools
Explore new data application patterns

Challenges Today:
Promotions, events and ticket-selling businesses are “burst-bound” by nature
Capacity constraints limit business agility
High costs of entry into new business
Difficult to roll out extra capacity quickly
Idle capacity “off-bursts” is cost prohibitive

Solution (SQL AZURE):
Elastic scale – database as a service
Pay as you grow and shrink
Easy to provision and manage database
No hardware, no manual database administration required

Best Integration – Office 2010 & Other Tools.
Focus on combining the best features of SQL Server running at scale with low friction.
Highly scaled out relational database as a service

Relational database service
SQL Server technology foundation
Highly symmetrical
Highly scaled
Database “as a Service” – beyond hosting

Customer Value Props
Self-provisioning and capacity on demand
Symmetry with the on-premises database platform
Automatic high-availability and fault-tolerance
Automated DB maintenance (infrastructure)
Simple, flexible pricing – “pay as you grow”

Application Topologies:
SQL Azure access from within the MS datacenter (Azure compute) -- also known as "Code Near"
SQL Azure access from outside the MS datacenter (on-premises) -- also known as "Code Far"
SQL Azure access from both within and outside the MS datacenter (on-premises & Azure compute) -- also known as "Hybrid"

Partitioning, when do I need it?: (Concept of X (Storage), Y (Transactions) and Z axis from Low to high)
Single Database, No Partitioning
Partitioned Data. Partitioning Based on Application Requirements (Storage)
Partitioned Data, Partitioning based on Application Requirements (IOPS)
Partitioned Data, Partitioning based on Application Requirements (IOPS, Storage or both)
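Because each SQL Azure database has fixed size and throughput limits, partitioning along these axes happens in the application. The sketch below shows one common shape, key-based routing across several databases; the shard names, the shard_for_key helper, and the hash-modulo scheme are all illustrative assumptions, not a SQL Azure feature.

```python
# Minimal sketch of application-level partitioning across several
# SQL Azure databases. The database names and the CRC32-modulo
# routing scheme are assumptions made for this example.
import zlib

SHARDS = ["CustomerDB_0", "CustomerDB_1", "CustomerDB_2"]

def shard_for_key(key: str) -> str:
    """Route a partition key to one member database deterministically."""
    bucket = zlib.crc32(key.encode("utf-8")) % len(SHARDS)
    return SHARDS[bucket]

# Every statement for a given customer goes to the same database,
# so single-customer transactions never span shards.
print(shard_for_key("customer-42"))
```

The routing function must be deterministic so that reads and writes for the same key always land on the same database; adding shards later requires rebalancing, which is why the shard count is usually chosen up front.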

Developing on a local SQL Express instance has some advantages
Easy to get started, you already know how to do it!
Full fidelity with the designer and debugging tools
Reduces latency when using local Azure development tools
Reduces bandwidth and databases costs for development
Some caveats
Remember to alter your VS build settings to switch the connection string when deploying
Use tools (like SQLAzureMW) to keep you within the supported SQL Azure features
Always test in SQL Azure before deploying to production

Connecting to SQL Azure:
SQL Azure connection strings follow normal SQL syntax
Applications connect directly to a database
“Initial Catalog = ” in connection string
No support for context switching (no USE statement)
Some commands must be in their own batch
Create/Alter/Drop Database & Create/Alter/Drop Login, & Create/Alter USER with FOR/FROM LOGIN
Encryption security
Set Encrypt = True, only SSL connections are supported
TrustServerCertificate = False, avoid Man-In-The-Middle-Attack!
Format of username for authentication:
User ID=user@server;Password=password;
Set up your firewall rules first!
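Putting the settings above together, an ADO.NET-style SQL Azure connection string can be assembled as in this sketch. The server name, database, and credentials are placeholders, not real values.

```python
def azure_connection_string(server, database, user, password):
    """Build an ADO.NET-style SQL Azure connection string using the
    settings discussed above: SSL required (Encrypt=True), server
    certificate validated (TrustServerCertificate=False), and the
    user name given in user@server form."""
    return (
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Initial Catalog={database};"
        f"User ID={user}@{server};Password={password};"
        "Encrypt=True;TrustServerCertificate=False;"
    )

# Placeholder values for illustration only.
cs = azure_connection_string("myserver", "mydb", "admin1", "p@ss")
print(cs)
```

Note that the application connects directly to the named database (Initial Catalog); switching databases means opening a new connection with a different catalog, since USE is not supported.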

Resilient Connection Management:
Connections can drop for variety of reasons
Idleness (greater than 30 minutes)
Long running transactions > 5 minutes
Resource Management
Database failover
Hardware failure
Load Balancing

What to do on connection failure?
Wait, then retry if it is a transient failure
Change your workload if throttled, i.e. break up your transaction
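The retry advice above can be sketched as a small helper. The transient-error class and the flaky workload here are simulated; a real implementation would inspect the SQL error number returned by the data provider to decide whether a failure is transient.

```python
import time

class TransientError(Exception):
    """Stand-in for a dropped-connection or throttling error."""

def run_with_retry(operation, retries=3, delay=0.01):
    """Retry an operation on transient failures, waiting between
    attempts, as recommended for SQL Azure connection drops."""
    for attempt in range(retries):
        try:
            return operation()
        except TransientError:
            if attempt == retries - 1:
                raise          # give up after the final attempt
            time.sleep(delay)  # back off, then reconnect and retry

# Simulated workload that fails once before succeeding.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] == 1:
        raise TransientError("connection was dropped")
    return "result set"

print(run_with_retry(flaky_query))  # succeeds on the second attempt
```

If the failure is throttling rather than a dropped connection, retrying the same workload is not enough; the transaction should also be broken into smaller batches before resubmitting.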

SQL Azure does not support SELECT INTO, so a statement such as the following fails:

SELECT * INTO #Destination FROM Source WHERE [Color] LIKE 'Red'

To work around this you need to create your destination table first and then call INSERT INTO. Here is an example:
CREATE TABLE #Destination (Id int NOT NULL, [Name] nvarchar(max), [Color] nvarchar(10))
INSERT INTO #Destination(Id, [Name], [Color])
SELECT Id, [Name], [Color] FROM Source WHERE [Color] LIKE 'Red';

SQL Azure Firewall:
What is it?
A way to restrict access to your database

How does it work?
Programmatic access
Common scenarios
I need my development machine to be able to access it
I need access from my laptop when I am not in the office
I want to restrict access to only my application running in Windows Azure

Data Access APIs:
Supported APIs:
ADO.Net .Net 3.5 SP1 and 4.0
Entity Framework .Net 3.5 SP1 and 4.0
SQL Server 2008 Driver for PHP v1.1

Connection String:
ADO.NET: Encrypt=True and add @servername to User ID
PHP: Encrypt=yes and add @servername to Uid

TCP/IP over port 1433 only; the following are not supported:
Dynamic ports
Named pipes
Shared memory

Authentication Mode:
SQL Auth only (Windows Auth is not supported)

Feature Parity:
Administration Surface:
Physical server properties do not apply in SQL Azure
You have a master database but no access to server level constructs such as
sp_configure, endpoints, DBCC commands, server level DMVs and System Views.
Programmability Surface:
Certain Features are partially available today:
USE, XML processing, deprecated T-SQL etc.
List is available here;

Some features are not available today:
Full-text Search, Remote Data Access and Linked Servers, Distributed Transactions,
Change Tracking, Service Broker etc.
List is available here;

Migration Assistant for MySQL and Access:
Auto-porting of schema, database code and data from MySQL and Access to SQL Azure
SQL Server Migration Assistant for MySQL and Access
Supports MySQL 4.1 and up
Supports Access 97 and up
SQL Server versions supported (all editions):
SQL Azure, SQL Server 2005, SQL Server 2008 and 2008 R2

Transferring Large Data:
Best Practices for efficient Data movement to SQL Azure
Using the right tools is key
Network performance considerations
Latency vs Bandwidth

Use the right tools (BCP & Bulk Copy APIs):
High speed programmable data import and export
Best Practices
Optimize Databases for Data Import
Disable/Delay creation of nonclustered indexes
Order Data on the Clustered Index
Disable Triggers and Constraint checking
-N Native mode so no data type conversion needed.
-c Character mode for interoperability
-b batchsize will let you control transaction size
-h "TABLOCK, ORDER(…)" to optimize writes
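The bcp switches listed above combine into a command line like the following sketch. The table, file, server, and credential names are placeholders, and the argument-building helper is purely illustrative.

```python
def bcp_import_args(table, data_file, server, user, password, batch=10000):
    """Assemble a bcp import command line using the switches
    discussed above. All names passed in are placeholders."""
    return [
        "bcp", table, "in", data_file,
        "-n",                      # native mode: no data type conversion
        "-b", str(batch),          # batch size controls transaction size
        "-h", "TABLOCK",           # bulk-load hint to optimize writes
        "-S", f"{server}.database.windows.net",
        "-U", f"{user}@{server}",  # SQL Azure user@server login form
        "-P", password,
    ]

# Placeholder values for illustration only.
print(" ".join(bcp_import_args("mydb.dbo.Orders", "orders.dat",
                               "myserver", "admin1", "p@ss")))
```

Use -c (character mode) instead of -n when the file must interoperate with non-SQL Server tools, and keep the batch size moderate so a transient failure only rolls back one batch rather than the whole load.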

Use the right tools (SSIS – Best of Breed Data Transformation Utility):
SSIS Design Surface - Data Flow Task
Diverse Source and destinations
To/From Flat Files, ADO.Net, OleDB
Fully programmable flow
Loops, Sorts, Conditional operators, XML/WebServices Processing etc
VS Debugging support with data viewers, watches and conditional breakpoints
Best Practices – Data Flow Task
Remember: Optimize Databases for Data Import
Disable/Delay creation of nonclustered indexes
Order Data on the Clustered Index
Disable Triggers and Constraint checking
Batch Size: Adjust the transaction size
Buffer and Blob temp storage area; spread over to multiple drives for large data transfers
Parallelization – based on execution trees, task will auto parallelize

Use the right tools (Import and Export Wizard):
Simplified wizard for migrating schema and data through SSIS
Great performance out of the box
Allows ‘save as package’ for full control

To know more details please visit and watch the videos:

ReportItems keyword in SSRS

In Reporting Services:
As we know, we can write functions on the Code tab and call them using Code.FunctionName. In the following code I pass ReportItems as a parameter, which exposes all of the report's items: table names, header and footer object names (in a nutshell, all the objects we have added to the report).
The main thing to understand is the ReportItems keyword. Very few BI folks have worked in this area, so this may be new learning for many of you. I recently fixed an issue with it on one of my projects, so I am sharing it with all of you.


Public Function FunctionName(Items As ReportItems) As String
    Dim strVehicleName As String = String.Empty
    ' Return the Vehicle_name report item's value if one is present.
    If Not IsNothing(Items("Vehicle_name").Value) Then
        strVehicleName = Items("Vehicle_name").Value
    End If
    Return strVehicleName
End Function

Facing Issue to deploy report Models or Reports from BIDS 2008

"Exception of type 'Microsoft.ReportingServices.RsProxy.AccessDeniedException' was thrown."
The issue may be caused by the publisher account not having the proper permissions. Please check whether the role “Content Manager” has the “Manage models” and “View models” permissions.

Here are the detailed steps:
1. Open “Report Manager” using http://ServerName/reports;
2. Click “Home” Link;
3. Click "Properties Tab”;
4. Click “New Role Assignment” if your account is not already listed;
5. Make sure everything is selected especially “Manage models” and “ View models”;

Saturday, October 23, 2010

How to Improve Your Car’s Fuel Efficiency

1. Check Tyre Pressure
2. Lighten Your Load
3. S-L-O-W D-O-W-N
4. Do Fuel Quality/Types/Additives Help Mileage?
5. Tune Your Engine
6. Clean the Air Filters Regularly
7. Keep the Windows Closed
8. Clean Spark Plugs
9. Don’t Be a Clutch-Driver
10. Keep the Car in Showroom Condition

Before you buy a car ask yourself...

1. What kind of driving do you do? Off road? Around town?
2. What features matter to you?
[Brand, Technical Specifications, Features, Color and Comfort], Air-con? Safety? Power?
3. What's your price range?
4. Where will you be parking? If in the open, prefer a light/metallic color; if under a roof, the choice is yours.
5. Do you have a garage or only on-street parking?
6. What kind of insurance can you afford?
7. What about the Horse power and BHP?
8. Is the servicing affordable?
9. For personal use, petrol; for commercial use, diesel.

Wednesday, October 20, 2010

What's New in SSRS 2008 R2


The new features fall into two broad categories: those that apply to a report server configured for SharePoint integration mode ("Server-related enhancements") and those related to the SQL Server 2008 R2 Reporting Services Add-in for SharePoint applications. Some features apply to both categories.

Support for Multiple SharePoint Zones
In SQL Server 2008 R2 Reporting Services, we have new options, via the alternate access mapping functionality in the SharePoint environment, to access report server items (in previous releases, report server items were available only from the default SharePoint zone). The latest release allows us to access these items from one or more of the following SharePoint zones:
• Default
• Intranet
• Extranet
• Internet
• Custom
This can be an advantage where SharePoint environment can be accessed by users from several zones. We could, for example, employ alternate access mapping to provide access to the same report server items from our primary SharePoint site for information consumers from the intranet and Internet zones.

Support for SharePoint User Token
Using a new server interface (called IRSSetUserToken), which debuts in SQL Server 2008 R2 Reporting Services, we can use the SharePoint user token to connect to a SharePoint site, via the SharePoint Object Model, and then retrieve data from the site using SharePoint credentials.

Claims Based Authentication
In addition to supporting the existing SharePoint User Tokens, SQL Server 2008 R2 Reporting Services in SharePoint integrated mode supports Claims Based Authentication.

Scripting with the rs Utility
Servers configured in SharePoint integrated mode are now supported using the rs utility, which can be employed to automate deployment and administration tasks.

When you use the SQL Server 2008 R2 Reporting Services Add-in for SharePoint applications, you’ll notice some improvements. In order to get the most benefit, Microsoft recommends that we use the SQL Server 2008 R2 version of the add-in with a SQL Server 2008 R2 report server.

Integration Configuration in Fewer Steps
The new SQL Server 2008 R2 Reporting Services Add-in for SharePoint applications makes it easier to integrate a SharePoint server with a Reporting Services report server. In the current version, fewer steps are required to configure report server integration within SharePoint Central Administration.

Reporting Services now provides several categories which can be configured to be a part of the SharePoint diagnostic logging. (SharePoint Foundation 2010 and SharePoint Server 2010 offer diagnostic logging to monitor activity on a SharePoint server.) The new categories are part of the 'SQL Server Reporting Services' log file area, and will log several categories of information, including the following:
• General: Logs events that involve access to the following items:
• Reporting Services Web pages
• Report Viewer HTTP handler
• Report access (.rdl files)
• Data sources (.rsds files)
• URLs on the SharePoint site (.smdl files)
• Database: Logs events that involve database access.
• Office Server General: Logs logon failures.
• Topology: Logs current user information.
• Web Parts: Logs events that involve access to the Report Viewer web part.
SQL Server 2008 R2 also makes available new categories for SharePoint diagnostic logging specifically for Reporting Services. The categories that log information with a product area of 'SQL Server Reporting Services' include the following:
• HTTP Redirector: Logs calls made by client applications to the report server.
• Configuration Pages: Logs calls from Reporting Services pages in SharePoint central administration
• UI Pages: Logs calls made by non-configuration-related pages within the SharePoint user interface.
• Soap Client Proxy: Logs calls made by the Reporting Services web pages or Report Viewer web part, to the report server.
• Local Mode Rendering: Logs call made by the Reporting Services rendering engine while in local mode.
• Local Mode Processing: Logs calls made by the Reporting Services processing engine while in local mode.

A new Microsoft SharePoint List and query experience allows us to easily bring SharePoint List information into a report when we designate a SharePoint List as a data source for reports.

Reports from Microsoft Access 2010 and the new Reporting Services SharePoint list data extension can run locally from the SharePoint document library (no connection to a SQL Server Reporting Services report server is required). In the new Local Mode we can use the Report Viewer to directly render reports from SharePoint (when the data extension supports local mode reporting).
Local mode supports rendering reports that have an embedded data source or a shared data source from an .rsds file. However, the management of the report or its associated data source is deliberately not supported in local mode, but only in connected mode.

In the latest version of Reporting Services, an HTTP redirector has been put in place to listen for client requests (such as those from Report Builder) to the SharePoint web front end. The HTTP redirector will re-direct such requests to the Report Server.

In SQL Server 2008 R2 Reporting Services, on a SharePoint server integrated with Reporting Services, report subscriptions and drill-through links work directly with the linked resources in the document library itself.

The SQL Server 2008 R2 Reporting Services-related interface within SharePoint now supports 37 languages, complex text layout for Thai, and right-to-left character sets for Hebrew and Arabic. In addition to document library menus and user interface controls, support includes the Report Viewer web part.

SQL Server 2008 R2 Reporting Services provides new features that support enhanced collaboration and reuse of components within the reporting environment. We can classify these new features within two groups: Report Parts and Shared Datasets.

Report Parts
Report Parts allow us to employ the various strengths and roles of team members. For example, a given team member can reuse, in his or her various reports, report parts that have been created and saved by another member, say, working within a developmental capacity. Previous versions of SQL Server Reporting Services did not offer a ready means of generating report parts (such as tables, charts and parameters) that might be used pervasively throughout our business intelligence solutions. SQL Server 2008 R2 Reporting Services supports our selection of individual report parts from a given report for uploading to a library on the report server, from which we can subsequently browse and select them for inclusion in prospective reports.
The items that we can publish as Report Parts include the following:
• Charts
• Gauges
• Images and embedded images
• Lists (via the Tablix data region)
• Maps
• Matrices (via the Tablix data region)
• Parameters
• Rectangles
• Tables (via the Tablix data region)
Report Parts are stored either on a report server, or within a SharePoint site that is integrated with a report server. Report Parts can be reused in multiple reports, and they can be updated on the server.
We can reuse report parts in many reports, where each report part we add to a given report uses a unique ID to maintain a relationship to the instance of the “parent” report part on the respective server or site. Report parts that display data (for example, a matrix, table, or chart) can now be based upon a shared dataset (see the section that follows); otherwise, when a report part is published, the dataset upon which it depends is saved as an embedded dataset. Moreover, embedded datasets can be based upon embedded data sources, but credentials are not stored in embedded data sources.
Choices for the type of dataset underlying a given Report Part can have security implications: if a Report Part depends upon an embedded dataset that uses an embedded data source, for example, the credentials for the embedded data source will need to be provided by anyone reusing this Report Part. (To avoid this, we would base our embedded and shared datasets upon shared data sources with stored credentials.)

The second new type of Report Server item that we see in SQL Server 2008 R2 Reporting Services is Shared Datasets. Shared Datasets can retrieve data from shared data sources that connect to external data sources. Shared Datasets use only shared data sources (embedded data sources are not allowed). A Shared Dataset can be based on a report model, or upon any data source for a supported Reporting Services data extension.
Many of us have no doubt experienced the need to use identical datasets within reports we designed that are similar in presentation, or that use similar or identical information. (This is particularly common when the datasets act as data sources for report parameters that are common to multiple reports). Before SQL Server 2008 R2 Reporting Services, we were limited to copying such a dataset’s definition from the Report Definition Language (RDL) of one report to that of another report to allow the latter to “share” the same dataset (the Business Intelligence Development Studio did not afford us the ability to copy and paste datasets between reports). The only other alternative was to create an identical dataset from scratch within the new report.
A Shared Dataset offers numerous benefits; at the top of the list is the fact that it provides a way to share a query among many reports. This practice helps report authors and others to build reports upon consistent sets of data. The dataset query can include dataset parameters. Moreover, we can configure a Shared Dataset to cache query results for specific parameter combinations. We can perform such a cache upon first use or through the specification of a schedule. Shared Dataset caching can be used together with report data feeds and report caching to help manage access to a data source, as well as to optimize our use of system resources.
We can “share” a dataset within the Report Designer simply by selecting Convert to Shared Dataset on the context menu that appears when we right-click a pre-existing dataset within a report. Once we have converted a standard dataset to a shared dataset, we can select that dataset from among a list of any other shared datasets, within the reports of the current Report Server project, simply by ensuring the selection of the radio button labeled Use a shared dataset within the Dataset Properties dialog that appears when we go to add a new dataset in the Report Data pane, Datasets folder.
When added to a report, a Shared Dataset (like a report part) maintains a relationship to the definition of the “parent” dataset on the report server. Unlike Report Parts, when the definition is changed, we do not have to accept updates. All reports that have the relationship always use the Shared Dataset definition on the report server.

SQL Server 2008 R2 Reporting Services offers three new data source types: Microsoft SQL Azure, Microsoft SQL Server Parallel Data Warehouse, and Microsoft SharePoint List. Let’s take a look at each of these data source types, together with a description of its nature and operation, in the sections below.

SQL Server 2008 R2 Reporting Services provides three new ways to visualize data in reports: Maps, Sparklines and Data Bars, and Indicators. Let’s take a look at each of these ways, together with a description of its nature and operation, in the sections below.

The new Map Wizard and Map Layer Wizard in Report Designer allow us to add maps and map layers to our reports, an example of which we can see in Illustration 3, to help visualize data against a geographic background. Once map elements are related to report data, we can control color, size, width, or marker type on any given layer, as well as add labels, legends, titles, a color scale, and a distance scale to assist users in interpreting the map display. Moreover, we can add interactive features, such as tooltips and drill-through links, or provide parameters that enable a user to interactively control the visibility of each layer, and much more.

Sparklines and Data Bars have the basic chart elements of categories, series, and values, but they have no legend, axis lines, labels, or tick marks. Both are simple charts used to convey much information in a little space (and often in-line with text), and they can be easily used within matrices and tables.
Sparklines and Data Bars are most effective when many of them are presented together, for rapid visual comparison (making it easy to identify outliers). Each Sparkline often represents multiple data points over time. Because Sparklines display aggregated data, they must go in a cell that is associated with a group (and are not added to a detail group in a table). By contrast, Data Bars can represent multiple data points, but typically illustrate only one (with each Data Bar typically presenting a single series).

Indicators are available in Report Builder 3.0 and Report Designer, and are icon-like, minimal gauges that convey the state of a single data value at a glance. They are typically used in matrices or tables to present data in rows or columns.
Indicators are often used to reflect:
• Trends – with directional images such as arrows;
• Ratings – using incremental icons such as stars;
• States (of being) – with traffic lights or check marks.
Once we drag the indicator item from the toolbox to a report we are crafting in Report Designer, we are prompted to make a selection among various options

SQL Server 2008 R2 Reporting Services continues to add features designed to support more rendering options. Moreover, we can now use our reports as the source of data feeds and as exports to Microsoft Excel.

We can now generate the names of worksheet tabs when we export reports to Excel, using report properties and page breaks in tandem. We can provide an initial page name for a report, which is exported as the default name of the worksheet tabs, or use page breaks and page names to provide a different name for each worksheet tab.

Via enhancements in SQL Server 2008 R2 Reporting Services, including the new Atom rendering extension, we can export a production report as a data feed or create a report whose primary purpose is to provide data, in the form of data feeds, to applications. The option to use reports as a data feed gives us an additional way to provide data to applications. We might, for example, prefer to hide the complexity of a data source and make the data simpler to use; alternatively, we might do this when the targeted data is not easy to access through client data providers. Another benefit is the availability of familiar Reporting Services features, such as Report Manager, security, report snapshots and scheduling, to manage the reports that now provide data feeds. Whatever the business need, the capability to render reports directly to data feeds means reusability of existing reports, as well as a means of making the best use of all the effort of creating and securing the underlying datasets.

SQL Server 2008 R2 Reporting Services introduces enhancements to page breaks in several key components, including:
• Tablix data regions (table, matrix, and list)
• Groups
• Rectangles
When we delimit the pages of a report using page breaks, we can now assign different reset page numbering and page names to those pages. Furthermore, we can leverage expressions to establish the dynamic update of page names and page numbers at runtime, or the conditional (in addition to full) disablement of page breaks.
An example of the settings involved in creating a page break, based upon a group (Product Category) within a sample report

We can now rotate standalone text boxes up to 270 degrees in:
• Report headers
• Report footers
• Report body
• Cells of tables
This rotation, and the display of text written vertically and bottom to top that it supports, can help us to better use “available real estate” to create more readable reports, fit more data on printed reports, and create reports with more graphical appeal.

Aggregates, expressions and functions gain power in SQL Server 2008 R2 Reporting Services. This includes:
• Enhanced expression support – New globals are introduced, including:
o OverallPageNumber and OverallTotalPages - Support for page numbering for the entire rendered report.
o PageName- Support for naming pages.
o RenderFormat - Support for information that is specified for the renderer.
A new read-only property (set by default) for report variables is introduced. This property provides a way to persist data across report rendering, page changes, and some types of user interaction. The value of the variable is set when the report is reprocessed, but is maintained in the current session. (This is the equivalent of setting the Writable property for a report variable in RDL.)
• Support for calculating aggregates of aggregates – we can now calculate aggregates of aggregates, and use this capability, as an example, to more precisely align horizontal and vertical axes for charts and scales for gauges (when nesting within a table, etc.).
• Lookup Functions – Expressions in data regions can now include references to functions which can retrieve names and values from a dataset that is not bound to the data region.

In SQL Server 2008 R2 Reporting Services, both report authoring tools, Report Builder and Report Designer, have been improved to make report design and creation easier and more efficient.
Enhanced Preview Experience in Report Builder 3.0
The introduction of edit sessions enables the reuse of cached datasets when previewing reports in Report Builder 3.0, resulting in quicker rendering. And because edit sessions are bound to a report, we can use references to subreports, as well as relative references, in reports.

Easier Report Design and Editing
Report Builder 3.0 provides a user-interface for changing credentials when it is unable to connect to the data source. We can use the Enter Data Source Credentials dialog box to change the credentials used by Report Builder 3.0 at design time, to connect to the data source as the current Windows user, or provide a user name and password. (This is to compensate for the fact that run-time credentials, specified in the data source properties of a report, might not work for design time tasks such as creating queries and previewing reports, depending upon the setup of the local environment).

New Capabilities in the Graphical Query Designer
The new query designer provides a graphical user interface to create custom relationships between related and unrelated tables. The query designer also makes it easy to include aggregates and grouping that summarize data in queries. The graphical query designer can now be used to create queries that retrieve report data from Microsoft SQL Server, Microsoft SQL Azure, and Microsoft SQL Server Parallel Data Warehouse databases, as we noted earlier.
Similarly, the new graphical query designer for the Microsoft SharePoint List data source type simplifies the design and creation of queries that retrieve report data from SharePoint lists. The query designer lists the fields in each SharePoint list item on a SharePoint site for easy inclusion into a query; the designer also helps us to intuitively define filters within our queries to limit the amount of data returned.

SQL Server 2008 R2 Reporting Services introduces cache refresh plans, enabling us to cache shared dataset query results or reports from a schedule, or upon first use. (Previous versions allowed us to control caching at the report level only, resulting in the simultaneous caching of all datasets.) We can schedule cache refreshes through an item-specific schedule or a shared schedule, managing and scheduling report caching separately from report subscriptions. This means that we can refresh certain report components independently, and with varying frequencies, from others. Cache refresh plans for commonly used parameter combinations, as one illustration, can help improve data retrieval and report viewing response times.

Report Manager has been considerably updated in SQL Server 2008 R2 Reporting Services, to make design and creation easier. More noticeable changes in Report Manager include an enhanced layout, which provides easier navigation to manage report properties and report server items, as well as an updated color scheme. A new drop-down menu is accessible for each report or report server item in a folder, whereby we can access the various configuration options for the report or item we choose.
Report Manager also brings improvements in navigation and workflow for viewing and managing reports and report server items: the new drop-down menu gives access to the various configuration options for each report or report server item in a folder, and eliminates the need to render a report before accessing and configuring report properties when in default view. It has an updated Report Viewer toolbar, which includes some updates to the toolbar controls, as well as the ability (discussed earlier) to export report data to an Atom service document and data feeds. It also provides more “real estate” for the Report Viewer when rendering reports.

The Business Intelligence Development Studio as installed with SQL Server 2008 R2 Reporting Services supports design, creation and maintenance of both SQL Server 2008 and SQL Server 2008 R2 reports, and of Report Server projects in the SQL Server 2008 R2 version of Business Intelligence Development Studio. This means that we can open, preview, save, and deploy either version of reports or Report Server projects.
We can set Report Server project properties to specify the version of the Report Server to which we wish to deploy reports, as well as to dictate how warnings and errors we might experience, when either upgrading a report from SQL Server 2008 to SQL Server 2008 R2, or reverting a report from SQL Server 2008 R2 to SQL Server 2008, are handled.

The Report Definition Language (RDL) schema introduced in SQL Server 2008 R2 includes a new element, together with elements that define the map report item.
In order to support management operations of the report server in both native mode and SharePoint integrated mode, SQL Server 2008 R2 includes a new Web Service management endpoint, named ReportingService2010, that merges the functionality of both the ReportingService2005 endpoint (used for managing objects on a report server configured for native mode) and the ReportingService2006 endpoint (used for managing objects on a report server configured for SharePoint integrated mode). Features that are introduced in SQL Server 2008 R2, such as shared datasets and cache refresh (which we have discussed earlier), are included within the new endpoint.

Tuesday, October 19, 2010

SSRS Standards

Layout and General Display
The title of the report will be the only item in the header. If the report contains sub-reports, the header should dynamically change to display the correct title of the sub-report that is displayed.
Headers and footers should display useful, informative content.

Hyperlinks should be underlined and blue for better user visibility and easier navigation.
Naming Conventions
Usage of “Report”
The word “Report” is redundant as part of the title of a report. It should not be included. For example, a report for Mileage and Drive Time may be called “Excessive Mileage Trending,” not “Excessive Mileage Trending Report.”
Filenames will consist of a binomial nomenclature, the report number and the report name, both as listed in the Master Report Tracking Sheet. They will be separated by a dash (-). There will be no spaces in the file name. For example, the report that fulfills requirement 77 is report number 10.5, and its name is “RSR Call Summary Report.” Therefore, its filename would be “10.5-RSRCallSummaryReport.rdl”. All drill-down report filenames will start with an underscore (_); the rest of the convention remains the same.
Report Titles
Reports should be titled according to their names in the Master Report Tracking Sheet. However, they can be changed to accommodate the requests of the appropriate business users.
Column/Row Headers
Use Normal, Trebuchet MS, 8pt, Bold, and White.
Header Cell Names
These should be defined in accordance with the header name in the report mockup that is signed off by the business.
All dataset names will start with “ds”, followed by the name in title case. For example, a fiscal year dataset should be named dsFiscalYear, not dsFiscalyear.

Add a Page Header to the report
Open a report and choose the Layout view. On the Report menu, click Page Header or Page Footer. Drag a TextBox from the Toolbox onto the header or footer and set its properties.

Hide a Page Header on the First and Last Page of the report
Open a report and choose the Layout view. Click the page header. In the Properties window, set the PrintOnFirstPage and PrintOnLastPage properties to False.

Add a Page Break
Open a report and choose the Layout view. Right-click an item and choose Properties. On the General tab, under Page breaks, select Insert a page break before the list or after the list.

Design Your Reports for Performance and Scalability:
Can my report handle a large amount of data?
How will the server be affected when many users view a report during peak hours?
How do I troubleshoot a performance problem?
Query Design
Optimize report queries.
Retrieve the minimum amount of data needed in your report.
Rendering Formats
Choose Appropriate Rendering Formats

Tips for ETL Design

First of all, we should know exactly what the requirement is.
We must have a clear understanding of blocking, semi-blocking and non-blocking transformations.
Choose the most appropriate option for the job rather than simply the first available option in the SSIS toolset.

Best Practices
#1 - Pulling High Volumes of Data
#2 - Avoid SELECT *
#3 - Effect of OLEDB Destination Settings
#4 - Effect of Rows Per Batch and Maximum Insert Commit Size Settings
#5 - SQL Server Destination Adapter
#6 - Avoid asynchronous transformations (such as the Sort transformation) wherever possible
#7 - DefaultBufferMaxSize and DefaultBufferMaxRows
#8 - BufferTempStoragePath and BLOBTempStoragePath
#9 - How DelayValidation property can help you
#10 - Better performance with parallel execution
#11 - When to use events logging and when to avoid
#12 - Monitoring the SSIS Performance with Performance Counters
#13 - Distributed Transaction in SSIS and its impact
#14 - How the Checkpoint feature helps in package restarting
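As a small illustration of #2, selecting only the columns the data flow actually needs keeps SSIS buffers narrow and reduces memory pressure. (The table and column names below are hypothetical, for illustration only.)

```sql
-- Wasteful: pulls every column into the data flow buffer
-- SELECT * FROM dbo.SalesOrderDetail;

-- Better: name only the columns the destination needs,
-- and filter rows at the source rather than in the data flow
SELECT SalesOrderID,
       ProductID,
       OrderQty,
       LineTotal
FROM dbo.SalesOrderDetail
WHERE ModifiedDate >= '2010-01-01';
```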

What's New in SQL 2008

PowerShell feature in Management Studio.
IntelliSense feature in Management Studio.
New data types in SSIS.
Many new enhancements in SSIS transformations.
C# and VB.NET are now the languages for SSIS scripts.
Tablix, Graphs, Maps and Gauges features.
New Data source for Reports like Azure, SQL DB and SharePoint.
Performance Improvement in all areas.
More reliable and secure database.
The IIS dependency has been removed; Reporting Services now hosts its own HTTP/HTTPS endpoints.
SQL R2 introduced recently.
Report Builder 3.0 launched.
Power Pivot feature added in SSRS (with Excel and SharePoint).
DAX – Data Analysis Expressions.
New MDX functions introduced.
Report Server and Report Manager Look and feel is also changed.
Many more features as well…

What is Business Intelligence

What is Business Intelligence:

Getting the right information to the right people at the right time, so that everyone can trust the data and use it for more effective decision making.
The interactive process of analyzing and exploring information to derive trends and patterns.
It is an approach to management that allows an organization to define what information is useful and relevant to its corporate decisions.

Tools for BI:
[Reporting Services] --> Cube or any Database
[Integration Services] --> Tables, Excel, Raw/Flat files
[Analysis Services] --> Cube and Mining

BI Applications:
Oracle, Siebel, Informatica, Cognos and Microsoft SQL Server

Connecting with SQL Server: While connecting to MS SQL Server, we have two options:
Management Studio as Back-End
Business Intelligence Development Studio as Designer Tool

Common Terminologies:
Integration Services
Packages, Tasks, Transformations

Reporting Services
Report Server, Report Manager, Report Builder, Performance Point Server, ProClarity, Excel Power Pivot, Share Point Server

Analysis Services
Cube, Fact, Dimension, Hierarchy, KPIs, Partitions

SQL Server 2008 for Beginners

How to create Database:



CREATE DATABASE neuSource
ON PRIMARY
( NAME = 'neuSource',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Source.mdf' ,
SIZE = 1280KB ,
FILEGROWTH = 1024KB ) -- growth increment assumed
LOG ON
( NAME = 'neuSource_log',
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Source_log.LDF' ,
SIZE = 504KB ,
MAXSIZE = 2048GB ,
FILEGROWTH = 10% ) -- growth increment assumed

How to see databases:

Use master

Select * from Sysdatabases
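On SQL Server 2005 and later, the catalog view sys.databases is the preferred alternative to the older sysdatabases compatibility view:

```sql
SELECT name, database_id, create_date, state_desc
FROM sys.databases
ORDER BY name;
```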

How to create Table

CREATE TABLE [dbo].[Audit](
[AuditKey] [int] IDENTITY(100,10) NOT NULL,
[TableProcessKey] [int] NOT NULL,
[PackageBranchName] [varchar](50) NOT NULL,
[PackageBranchRowCount] [int] NULL
)

Create Table Statement to create Foreign Key (Column Level )

USE AdventureWorks2008
GO
CREATE TABLE ProductSales (
SalesID INT CONSTRAINT pk_productSales_sid PRIMARY KEY,
ProductID INT CONSTRAINT fk_productSales_pid FOREIGN KEY REFERENCES Products(ProductID),
SalesPerson VARCHAR(25)
)


Table Level

CREATE TABLE ProductSales (
SalesID INT,
ProductID INT,
SalesPerson VARCHAR(25),
CONSTRAINT pk_productSales_sid PRIMARY KEY(SalesID),
CONSTRAINT fk_productSales_pid FOREIGN KEY(ProductID) REFERENCES Products(ProductID)
)

Alter Table Statement to create Foreign Key

ALTER TABLE ProductSales
ADD CONSTRAINT fk_productSales_pid FOREIGN KEY(ProductID)REFERENCES Products(ProductID)

Alter Table Statement to Drop Foreign Key

ALTER TABLE ProductSales
DROP CONSTRAINT fk_productSales_pid;

How to create duplicate table without records

--select * into NewTable from OldTable where 1 = 2
select top 0 * into NewTable from OldTable

Create Table Statement to create Primary Key (Column Level )

USE AdventureWorks2008
GO
CREATE TABLE Products (
ProductID INT CONSTRAINT pk_products_pid PRIMARY KEY,
ProductName VARCHAR(25)
)


Table Level

CREATE TABLE Products (
ProductID INT,
ProductName VARCHAR(25),
CONSTRAINT pk_products_pid PRIMARY KEY(ProductID)
)

-- Alter Table Statement to create Primary Key

ALTER TABLE Products
ADD CONSTRAINT pk_products_pid PRIMARY KEY(ProductID)

-- Alter Statement to Drop Primary key

ALTER TABLE Products
DROP CONSTRAINT pk_products_pid;

How to check table is existing or not?

IF EXISTS (select * from sys.objects where type = 'U' and name = 'Audit')
Print 'Table Already Exists'
ELSE
Print 'Need to create this table'

How to check Procedure is existing or not?

IF NOT EXISTS (select 1 from sys.objects where type = 'P' and name = 'TestSP')
Print 'Procedure does not exist'
ELSE
Print 'Procedure Already Exists'
GO

-- Sample procedure definition (dbo.ETLConfiguration is an illustrative table)
CREATE PROCEDURE TestSP
@LastName nvarchar(50) = NULL,
@FirstName nvarchar(50) = NULL
AS
SELECT * FROM dbo.ETLConfiguration
WHERE ConfigurationFilter LIKE @FirstName
AND ConfigurationFilter LIKE @LastName;

How to check Function is existing or not?

IF NOT EXISTS (select 1 from sys.objects where type = 'FN' and name = 'TestFunc')
Print 'Function does not Exist'
GO

-- Sample scalar function
CREATE FUNCTION CubicVolume
-- Input dimensions in centimeters.
(@CubeLength decimal(4,1),
@CubeWidth decimal(4,1),
@CubeHeight decimal(4,1) )
RETURNS decimal(12,3) -- Cubic Centimeters.
AS
BEGIN
RETURN ( @CubeLength * @CubeWidth * @CubeHeight )
END

Few most frequently used commands:

sp_helptext ProcedureName/FunctionName
sp_depends TableName/StoredProcedure

--Rename Table
EXEC sp_rename 'Sales.SalesTerritory', 'SalesTerr','Table';

--Renaming a column
EXEC sp_rename 'Sales.SalesTerritory.TerritoryID', 'TerrID', 'COLUMN';

--Renaming an index
EXEC sp_rename 'Purchasing.ProductVendor.IX_ProductVendor_VendorID', 'IX_VendorID', 'INDEX';

--Renaming an alias data type
EXEC sp_rename 'Phone', 'Telephone', 'USERDATATYPE';

Common functions we use on a regular basis:
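For example, a few functions we reach for almost daily, shown together in a single query:

```sql
SELECT GETDATE()                              AS CurrentDateTime,
       ISNULL(NULL, 'default')                AS IsNullDemo,     -- substitute a value for NULL
       LEN('SQL Server')                      AS StringLength,
       UPPER('ssrs')                          AS UpperCase,
       SUBSTRING('Reporting', 1, 6)           AS SubstringDemo,  -- returns 'Report'
       CONVERT(varchar(10), GETDATE(), 101)   AS UsDateFormat;   -- mm/dd/yyyy
```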


SQL Server Temporary Tables Optimization Tips

Use table variables instead of temporary tables whenever possible. Table variables were introduced in SQL Server 2000, and they require fewer locking and logging resources than temporary tables, which are created and stored in the tempdb database. Try to avoid using temporary tables inside your stored procedures: doing so reduces the chance of reusing the execution plan.
Try to avoid using insensitive, static and keyset cursors whenever possible. These cursor types produce the largest overhead on SQL Server, because they cause a temporary table to be created in the tempdb database, which results in some performance degradation. Use multi-statement table-valued functions to eliminate temporary-table usage for intermediate result processing: because table variables require fewer locking and logging resources than temporary tables, multi-statement table-valued functions are usually the better choice. Where possible, rewrite your Transact-SQL statements to use only standard queries or stored procedures rather than temporary tables.
Use derived tables or correlated sub-queries instead of temporary tables whenever possible. Because temporary tables are stored in the tempdb database on disk, using them increases the number of disk read/write operations. Use local temporary tables instead of SQL Server cursors: temporary tables are usually more efficient than cursors. Avoid creating temporary tables inside a transaction; doing so requires many locking and logging resources, can lock some system tables, and can decrease overall SQL Server performance. Avoid using global temporary tables.
SQL Server supports local and global temporary tables; both are stored in the tempdb system database. Local temporary tables are visible only in the current session, and their names are prefixed with a single number sign (#table_name); global temporary tables are visible to all sessions, and their names are prefixed with a double number sign (##table_name). Local temporary tables are dropped automatically at the end of the current session, whereas global temporary tables are dropped only when all tasks have stopped referencing them, so you should avoid using global temporary tables. Consider creating a permanent table instead: if your queries contain a loop and use temporary tables repeatedly, a permanent table to store the intermediate results saves the cost of creating and dropping the table again and again. Finally, because all temporary tables live in tempdb, spend some time optimizing the tempdb database. Place it on a fast I/O subsystem (for example, RAID 0 disks), and set a reasonable initial size and a reasonable autogrow increment, since frequent automatic growth results in some performance degradation.
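A minimal sketch contrasting the two approaches discussed above (the table and column names are hypothetical):

```sql
-- Table variable: fewer locking/logging resources, scoped to the batch
DECLARE @Totals TABLE (ProductID int PRIMARY KEY, Qty int);
INSERT INTO @Totals (ProductID, Qty)
SELECT ProductID, SUM(OrderQty)
FROM dbo.SalesOrderDetail
GROUP BY ProductID;

-- Local temporary table: lives in tempdb, dropped automatically
-- at the end of the session (or explicitly, as here)
CREATE TABLE #Totals (ProductID int PRIMARY KEY, Qty int);
INSERT INTO #Totals (ProductID, Qty)
SELECT ProductID, SUM(OrderQty)
FROM dbo.SalesOrderDetail
GROUP BY ProductID;
DROP TABLE #Totals;
```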

Types of Testing


ACCEPTANCE TESTING: Testing to verify a product meets customer specified requirements. A customer usually does this type of testing on a product that is developed externally.

BLACK BOX TESTING: Testing without knowledge of the internal workings of the item being tested. Tests are usually functional.

COMPATIBILITY TESTING: Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

CONFORMANCE TESTING: Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

FUNCTIONAL TESTING: Validating an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature by feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Testing can be performed on an automated or manual basis using black box or white box methodologies.

INTEGRATION TESTING: Testing in which modules are combined and tested as a group.
Modules are typically code modules, individual applications, client and server applications on a network, etc. Integration Testing follows unit testing and precedes system testing.

LOAD TESTING: Load testing is a generic term covering Performance Testing and Stress Testing.

PERFORMANCE TESTING: Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.

REGRESSION TESTING: Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site.
Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process.
Though regression testing can be performed manually an automated test suite is often used to reduce the time and resources needed to perform the required testing.

SMOKE TESTING: A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

STRESS TESTING: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how.
A graceful degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.

SYSTEM TESTING: Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

UNIT TESTING: Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.

WHITE BOX TESTING: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.