I finally got tired of aliasing the same fields and tables over and over, so I decided to write a script that uses the UTHELP table in the M2MSYSTEM database to auto-generate a SELECT statement, using the M2M label as the alias for each column.

 

-- **********************************  *************************************-- 
DECLARE @table VARCHAR(200) = 'inmast' 
-- **********************************  *************************************-- 
DECLARE @clm VARCHAR(8000) = '', 
        @cmd VARCHAR(8000) = '' 

SELECT @clm = @clm + clm 
FROM   (SELECT DISTINCT clm 
        FROM   (SELECT '[' + fcfield + '] AS [' 
                       + CASE Ltrim(Rtrim(fclabel)) 
                           WHEN '' THEN Ltrim(Rtrim(fcfield)) -- fall back to the field name when no label exists 
                           ELSE Ltrim(Rtrim(fclabel)) 
                         END + '], ' AS clm 
                FROM   m2msystem.dbo.uthelp 
                WHERE  fctable LIKE '%' + @table + '%') a) b 

-- strip the trailing ', ' separator (LEN ignores the trailing space, so only the comma needs to go) 
SET @clm = LEFT(@clm, Len(@clm) - 1) 
SET @cmd = 'SELECT ' + @clm + ' FROM ' + @table 

PRINT @cmd 
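
For reference, with @table = 'inmast' the printed command comes out looking something like this (the labels below are illustrative; the real aliases come from whatever is in your UTHELP table):

SELECT [fpartno] AS [Part Number], [frev] AS [Revision], [fdescript] AS [Description] FROM inmast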


First, I'll start by describing the problem domain:

Stored procedures are meant to be exactly what their name implies: a stored script that can be run on demand, using parameters, to perform a repeatable task. The literal meaning of procedure is "an established or official way of doing something".
Stored procedures are great, but all too often people use them as a crutch to avoid spending the time and effort to break their linear thinking down into a more appropriate flattened structure, like conditional joins and common table expressions. My personal rule of thumb: if the end goal of the script you are writing is to generate a single dataset, then it is not a stored procedure; it's a view or a table-valued function.
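
To make that concrete, here's a minimal sketch of what the "single dataset" case looks like as an inline table-valued function (the table and column names are hypothetical, not from a real M2M schema):

CREATE FUNCTION dbo.Fn_widgets_missing_names (@facility VARCHAR(20))
RETURNS TABLE
AS
    RETURN
      SELECT w.widget_id,
             w.created_on
      FROM   dbo.widgets AS w -- hypothetical source table
      WHERE  w.facility = @facility
             AND Ltrim(Rtrim(Isnull(w.widget_name, ''))) = ''

Any reporting tool can then SELECT from it like a table: SELECT * FROM dbo.Fn_widgets_missing_names('MAIN')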

Because the result set of a stored procedure can be more than a single dataset, third-party tools often can't consume the output of an SP directly. This is how we end up with point-in-time reporting, nightly agent jobs, and a heap of UD tables, or as I like to call it, "Data Sprawl". What was meant to return a single dataset for one of your managers has now spawned several objects and a job that has to be monitored, just to determine how many widgets didn't have their names entered during setup. Which is silly.

"But Josh, sometimes you don't have a choice but to use an SP, especially when using heterogeneous data sources!"

WRONG. Kind of…

Enter the loopback linked server.

What is it?

It's a linked server that loops back, obviously. :-)
But seriously, that's all it is. All you have to do is create a linked server, and for the name parameter just use [.]

EXEC master.dbo.sp_addlinkedserver 
  @server = N'.', 
  @srvproduct = N'SQL Server' 
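
Depending on your security setup, you may also want to map logins so the loopback call runs under the caller's own credentials; this is the standard sp_addlinkedsrvlogin call:

EXEC master.dbo.sp_addlinkedsrvlogin 
  @rmtsrvname = N'.', 
  @useself = N'True', 
  @locallogin = NULL 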

"Okay, but how is that helpful?"

I'm glad you asked!

If you find yourself having to create a stored procedure for one reason or another, and you have a loopback linked server, you can call out the loopback server in an OPENQUERY or OPENROWSET and execute the SP, or even the raw script, in the command text. As long as the return is a single dataset, the server will see it just as it would a view or table, which allows it to be run on demand by a user in any third-party application without worrying about whether that application supports stored procedures.
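
In its simplest form, for a proc that returns one statically-defined result set, a direct call can be enough (the database and proc names here are hypothetical):

SELECT * 
FROM   Openquery([.], 'SET NOCOUNT ON; EXEC mydb.dbo.usp_GetWidgets') 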

You can even use OPENQUERY inside a view, too.

CREATE VIEW v_rpuser 
AS 
  SELECT * 
  FROM   Openquery([.], 'SET NOCOUNT ON; SET FMTONLY OFF; 
    DECLARE @rpuser AS TABLE 
      ( FCUNAME varchar(20), FCINITIALS varchar(6), FCCOMPID varchar(5), 
        FCMODID varchar(20), FCACCLVL varchar(15), FCDESC varchar(max), 
        FCMODULE varchar(30), FCTYPE varchar(20), FNLEVEL int, SrtOrdr int ) 

    INSERT INTO @rpuser 
    EXEC m2msystem.[dbo].[ExecuteReportProc] 294351 

    SELECT * FROM @rpuser') a 

In the above example, I first turn off all the extra query output using the SET commands, then declare a table variable, insert data into it from a stored proc, and finally select the data back out, all INSIDE of a view!
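
Once the view exists, anything that can read a table can read it; for example (the company ID filter value is made up):

SELECT FCUNAME, FCMODULE, FCACCLVL 
FROM   v_rpuser 
WHERE  FCCOMPID = '01' 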

This completely eliminates the need for a job to refresh the data and makes the data available live. Be sure not to use global temp tables in the command text, though; they can cause interference when more than one user executes the view at the same time.

This is invaluable when using datasets from multiple sources where you need to keep the output as a view.

 

 


It can be very frustrating trying to track down the stored procedure where certain fields are updated when you are working with a database that you didn't create and that is poorly documented (which basically describes every database I deal with).

To help cope with that, I wrote a simple script that locates the presence of a string in any stored procedure, in any database, on the instance you are connected to.

Removing the OBJECTPROPERTY check in the dynamic SQL statement will make the script search through all object definitions, including tables and views...

 

-- ********************************** Place Search Term Here *************************************-- 
DECLARE @findtext AS VARCHAR(1000) = 'text' 

-- ********************************** Place Search Term Here *************************************-- 
IF Object_id(N'tempdb..##tbl') IS NOT NULL 
  BEGIN 
      DROP TABLE ##tbl 
  END 

CREATE TABLE ##tbl 
  ( 
     objname VARCHAR(1000) 
  ) 

DECLARE @cmd AS NVARCHAR(4000) 

-- group by schema as well as object name so same-named procs in different 
-- schemas are listed separately (syscomments splits long definitions across rows) 
SET @cmd = ' USE [?] INSERT INTO ##tbl SELECT ''[?].['' + OBJECT_SCHEMA_NAME(id) + ''].['' + OBJECT_NAME(id) + '']'' AS found_in FROM syscomments WHERE [text] LIKE ''%' + @findtext 
           + '%'' AND OBJECTPROPERTY(id, ''IsProcedure'') = 1 GROUP BY OBJECT_SCHEMA_NAME(id), OBJECT_NAME(id)' 

EXEC sp_MSforeachdb @cmd 

SELECT * 
FROM   ##tbl 
ORDER  BY objname 

IF Object_id(N'tempdb..##tbl') IS NOT NULL 
  BEGIN 
      DROP TABLE ##tbl 
  END 
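
As an aside, on anything reasonably modern you can get the same answer for a single database from sys.sql_modules, which doesn't split long definitions across rows the way syscomments does. A minimal sketch, current database only:

SELECT Quotename(Schema_name(o.schema_id)) + '.' + Quotename(o.name) AS found_in 
FROM   sys.sql_modules m 
       INNER JOIN sys.objects o 
               ON o.object_id = m.object_id 
WHERE  m.definition LIKE '%text%' 
       AND o.type = 'P' -- procedures only; drop this line to search all definitions 
ORDER  BY found_in 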


Everyone knows how frustrating dealing with SQL backups can be.

This is a script that automatically generates log-shrink statements for all databases on a SQL server and orders them biggest to smallest.

Just run the script on a SQL server, then copy the output of the [cmd] column into a new query window for the database you'd like to shrink.

Be aware, though: this will cause your point-in-time recoverability to restart at the point the generated command is run, since it flips the database to SIMPLE recovery and back.

Be sure you understand what the ramifications of this are.

 

SELECT d.NAME, 
       log_size / 1024 AS log_size_mb, 
       'USE [master]; ALTER DATABASE [' + d.NAME + '] SET RECOVERY SIMPLE WITH NO_WAIT; ' 
       + 'USE [' + d.NAME + ']; ' -- bracket the name so it survives databases with odd characters 
       + 'DECLARE @FILENAME AS VARCHAR(2000); ' 
       + 'SELECT @FILENAME = name FROM sys.database_files WHERE type_desc = ''LOG''; ' 
       + 'DBCC SHRINKFILE(@FILENAME, 1); ' 
       + 'ALTER DATABASE [' + d.NAME + '] SET RECOVERY FULL WITH NO_WAIT;' AS cmd 
FROM   sys.databases AS d 
       INNER JOIN (SELECT Rtrim(instance_name) AS [database], 
                          cntr_value           AS log_size 
                   FROM   sys.dm_os_performance_counters 
                   WHERE  object_name LIKE '%:Databases%' -- matches default and named instances 
                          AND counter_name = 'Log File(s) Used Size (KB)' 
                          AND instance_name <> '_Total') s 
               ON d.NAME = s.[database] 
WHERE  d.NAME NOT LIKE 'Report%' -- skips the ReportServer databases 
ORDER  BY log_size DESC 
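
Keep in mind that after flipping back to FULL, point-in-time recoverability doesn't actually resume until a full (or differential) backup is taken, so follow each shrink with something like this (database name and path are placeholders):

BACKUP DATABASE [YourDatabase] 
TO DISK = N'D:\Backups\YourDatabase.bak' 
WITH INIT 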

 


I’ve been seeing more and more come in about chip-level vulnerabilities, and it has bothered me enough that I feel like I need to get on my soapbox for a minute…

Any VMM-level mitigation, especially disabling hyperthreading (which is what they are implying in the last paragraph), will cause serious performance degradation. The real solution is to patch the hypervisor to prevent code execution in the first place while the overall situation is assessed. The issue is that any higher-level patch doesn’t actually eliminate the flaw; it just prevents its exploitation, and all it takes is one clever person with enough time and motivation to sidestep any countermeasure.

From my perspective, they are just describing another specific instance of a universal truth of predictive memory management: it’s exploitable, as a byproduct of how it works.

The real issue is that we’ve become reliant on that type of memory management and bake it into our performance expectations. Unless we come up with an entirely new way to manage memory, there are two options:

Accept these fundamental issues with predictive memory management and demand that software vendors become nimble enough to patch zero-day exploits of the vulnerabilities, OR give up on predictive memory management entirely and accept that we are nearing the physical performance limits of the materials we use to manufacture silicon, and that any attempt to ‘fudge’ performance gains out of a predictive algorithm creates an unacceptable security threat vector…

Everyone is talking about these things from a ‘What are we going to do about this?’ perspective, but the truth is, there really isn’t anything that can be done until the more basic underlying layers are communally understood…

  

I’m done…
