
LobsterPot Solutions™ is proud to be a Gold Competency Partner in the Microsoft Partner Network.

Welcome to LobsterPot.


Improving your data story.

LobsterPot Solutions™ is an Australian company with staff in Adelaide, Melbourne and Canberra, offering consultancy and training services around Australia and to the rest of the world. LobsterPot Solutions specialises in SQL Server, including performance tuning, Business Intelligence, and more.
We can turn your data into information.

Performance Tuning

Dashboards and Reporting

Proofs of Concept

More Services here.  

News and Events

Australia’s only Parallel Data Warehouse trainer

In fact, not just Australia – but in the whole Asian region!

When Microsoft ran Parallel Data Warehouse training in Australia, LobsterPot Solutions sent three staff members. PDW is a growing area that will be of vital importance to many organisations as they deal with ever-expanding data, and it is naturally of great interest to SQL-centric companies like LobsterPot. Following the course, Rob Farley travelled to Auckland and became the first PDW trainer in the Asian region outside Microsoft.

Within Australia there are a number of organisations capable of implementing a PDW solution for a customer, but only LobsterPot Solutions can also provide a trainer, ensuring that staff receive the training they need to get the most out of a PDW implementation.

Cloud – the forecast is improving

There is a lot of discussion about “the cloud”, and how that affects people’s data stories.

Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data and to have confidence in it. Data should be a foundation upon which a business is built.


In the past, data was stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t scale particularly well. It’s easy to ‘lose’ data in a filing cabinet when you rely on people to keep the sheets of paper in the right spot and to remember how things are stored. Databases help solve that problem, but the idea of a large filing cabinet continues; it just doesn’t involve paper.

If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets.

You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else.

The cloud offers an alternative to this, and the human element is an obstacle.

By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data – the people who could potentially log in with escalated rights and see more than they should be allowed to, and who need to be trusted to respond if there’s a problem – are now a faceless entity in the cloud.

But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have.

In new functionality that’s on its way, we see other hybrid mechanisms that let people leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example – backups that have been encrypted and therefore can’t be read by anyone (including administrators) who doesn’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept.
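
As a rough illustration of the backup scenario, here is a minimal T-SQL sketch, assuming a SQL Server version that supports backup to Azure blob storage and native backup encryption, and assuming a backup certificate has already been created; the storage account, credential, certificate and database names are all placeholders:

    -- Store the cloud storage account credentials (placeholder values).
    CREATE CREDENTIAL AzureBackupCred
        WITH IDENTITY = 'mystorageaccount',
        SECRET = '<storage-access-key>';

    -- Back up straight to cloud storage, encrypted so that nobody without
    -- the certificate (administrators included) can read the backup.
    BACKUP DATABASE SalesDB
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
        WITH CREDENTIAL = 'AzureBackupCred',
        ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);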

Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close.

Two SQL MVP awardees at LobsterPot Solutions

LobsterPot Solutions is the only company with two current Australian SQL MVP awardees on staff, after Julie Koesmarno received her first Microsoft Most Valuable Professional award. She joins Rob Farley, who is now an eight-time awardee, as one of only ten SQL MVPs in Australia.


Microsoft gives this award to “exceptional, independent community leaders who share their passion, technical expertise, and real-world knowledge of Microsoft products with others” (mvp.microsoft.com). The award’s status is extremely high, making it one of the most sought-after awards in the Microsoft ecosystem. Julie joins an elite crew in receiving this, no doubt influenced by her involvement in technical communities in roles such as conference presenter, webinar host and community leader.

Recently relocated to Canberra, Julie is about to jet off to present about Business Intelligence at the PASS Summit, this year held in North Carolina. The PASS Summit is the largest SQL Server event in the world.

LobsterPot Solutions appoints Martin Woodcock as General Manager to drive business expansion

Microsoft Gold Partner LobsterPot Solutions – a leading business intelligence and SQL Server performance tuning consultancy – has announced the appointment of Martin Woodcock as General Manager to underpin its expansion throughout Australia.


Martin has spent the past 5 years providing sales management consulting to clients in Adelaide and Melbourne. Previously he held a variety of positions at Oracle Corporation in Europe, from UK Business Intelligence Sales Manager to Sales Vice President, South-East Europe.

Rob Farley, CEO of LobsterPot Solutions, believes that ‘Martin will bring a great deal of business development experience and his previous track record in business intelligence will supplement our business analysis resources. He will also be responsible for project management’.

Martin ‘looks forward to working alongside Rob and his colleagues – complementing their in-depth technical expertise’. His role will be part-time and he will continue his sales management consultancy with other clients through his own company, Two Hemispheres.

For further information, contact: Martin Woodcock, General Manager, LobsterPot Solutions; martin.woodcock@lobsterpot.com.au; 0414 960559.

Microsoft Certified Master in SQL Server

As mentioned in that earlier post, Rob took the MCM exams recently.

Well, he passed!

He now joins a very select few people in the world who are Microsoft Certified Master, Microsoft Certified Trainer and Microsoft Most Valuable Professional all at once. This makes him and LobsterPot an even better fit for your SQL Server projects.


Microsoft Certified Master is described as the top technical certification available in SQL Server. It is so select that Microsoft even lists all the people who have this certification publicly. You’ll see Rob on that list soon too.

Microsoft Certified Trainer indicates that someone not only has sufficient technical expertise, but also has the presentation skills and aptitude to teach in a classroom environment. LobsterPot values these skills, and has three MCTs on staff, all able to deliver the highest quality SQL Server training and mentoring for your organisation. This is an important aspect of consulting – being able to explain concepts clearly and effectively.

The Microsoft Most Valuable Professional award is another exclusive award, given out to people who have demonstrated great influence in the technical community. All the LobsterPot staff are involved in the SQL Server community, which is a way that we contribute back to the SQL Server world. It also means that we have an excellent network of resources, just in case.

To have all three is significant. Congratulations Rob.

MCM exam at Academy IT

You won’t find Adelaide’s Academy IT on a list of places you can sit the Microsoft Certified Master exams yet (maybe one day). However, if you choose the remote option for the MCM Lab exam, they can probably help.

Academy IT do training, with easily the best training centre in Adelaide.


They recently expanded to have a third large training room, with machines that each have 32GB of RAM and dual 23” monitors – one hooked up to the trainer’s desktop and an interactive whiteboard. It’s very impressive, and an excellent option for courses. On top of that, they’re also a great place to take Prometric and Pearson VUE exams – including the Microsoft ones. When the beta exam period came in for SQL Server 2012, LobsterPot Solutions staff used Academy IT’s facilities to take them. Whenever LobsterPot staff need to take any exams, Academy IT is the location of choice.

LobsterPot’s Rob Farley successfully took the SQL Server MCM Knowledge exam (88-970) at the PASS Summit in Seattle. However, the real test with the MCM is the Lab exam. Many people take this exam and fail – passing at the first attempt is rare. Rob took the exam on December 20th, 2012, at Academy IT, using the remote option.

What the remote option means is that the candidate provides their own hardware, with an internet connection and an external web cam. During the session, the web cam broadcasts what’s going on in the room to a proctor, who keeps an eye on both that and on what is happening in a LiveMeeting or Lync session that shares the candidate’s full screen. It’s like having a test centre without having a test centre. In Rob’s case, the proctor was based in Charlotte, NC, USA.

There are Prometric testing venues around the world where the MCM Lab exam can be taken without using the remote option. However, the exam involves a remote desktop session to a computer somewhere else in the world, and there are typically latency hassles. Furthermore, testing centres don’t tend to have a reputation for providing particularly good hardware for their exam machines. Monitors don’t always offer particularly good resolution, and the machines themselves are often quite laggy. We heard stories of a candidate who took the exam just this past week at a testing centre in the US, who had an awful time, losing time to unplanned reboots, an insufficiently large monitor, and more.

Taking the exam at Academy IT won’t remove the latency problem – the signal still needs to get to somewhere in America and back. But Academy IT were able to offer Rob a well-spec’d PC with a large monitor and an easily fast-enough internet connection. As it was, he opted to use his laptop for the familiarity of the keyboard, and he had brought his own external webcam too. Regardless, Academy IT had excellent facilities, ensuring there was no technical reason for Rob not to pass the exam. They want people to have the best experience when taking exams, and were able to provide an excellent environment for the MCM exam.

Hopefully Rob passed, and will soon officially be a Microsoft Certified Master in SQL Server. Big thanks also to Academy IT for playing their part in the process.

Replication explained

Replication can be a tricky thing for people to get their heads around. At LobsterPot Solutions we’re asked about it on a regular basis. Luckily, our own Rob Farley has put a post together about it, which has been cross-posted here.


I spent some time explaining SQL Server Replication to someone recently. They said they hadn’t ever really understood the concepts, and that I’d managed to help. It’s inspired me to write a post that I wouldn’t normally do – a “101” post. I’m not trying to do a fully comprehensive piece on replication, just enough to be able to help you get the concepts.

The way I like to think about replication is by comparing it to magazines. The analogy only goes so far, but let’s see how we go.

The things being replicated are articles. A publication (the responsibility of a publisher) is a collection of these articles. At the other end of the process are people with subscriptions. It’s just like when my son got a magazine subscription last Christmas. Every month, the latest set of articles got delivered to our house. (Feel free to subscribe to FourFourTwo – excellent magazine, particularly when they’re doing an article about the Arsenal.) Most of the work is done by agents, such as the newsagent that gets it to my house.

In SQL Server, these same concepts hold. The objects which are being replicated are articles (typically tables, but also stored procedures, functions, view definitions, and even indexed views). You might not replicate your whole database – just the tables and other objects of interest. These articles make up a publication. Replication is just about getting that stuff to the Subscribers.

Of course, the magazine analogy breaks down quite quickly. Each time my son got a new edition, the articles were brand new – material he’d never seen before. In SQL Replication, the Subscribers probably have data from earlier. But this brings us to look at a key concept in SQL Replication – how the stupid thing starts.

Regardless of what kind of replication you’re talking about, the concept is all about keeping Subscribers in sync with the Publisher. You could have the whole table move across every time, but more than likely, you’re going to just have the changes go through. At some point, though, the thing has to get to a starting point.

This starting point is (typically) done using a snapshot. It’s not a “Database Snapshot” like what you see in the Object Explorer of SQL Server Management Studio – this is just a starting point for replication. It’s a dump of all the data and metadata that make up the articles, and it’s stored on the file system. Not in a database, on the file system. A Subscriber will need this data to be initialised, ready for a stream of changes to be applied.

It’s worth noting that there is a flavour of replication which just uses snapshots, known as Snapshot Replication. Every time the subscriber gets a refresh of data, the whole publication has to flow down. This might be fine for small amounts of data, but not for larger ones.

(There are other ways to get started too, such as by restoring a backup, but you should still be familiar with the concept of snapshots for replication.)

To get in sync, a subscriber would need the data in the snapshot for initialisation, and then every change that has happened since. To reduce the effort that would be required if something went drastically wrong and a new subscription became needed, snapshots can be recreated at regular intervals. This is done by the Snapshot Agent, and like all agents, can be found as a SQL Server Agent job.
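
For illustration, here’s a hedged T-SQL sketch of creating and running the Snapshot Agent job; the publication name is a placeholder, and the publication must already exist (creating one is sketched further down in this post):

    -- Create the Snapshot Agent job for an existing publication,
    -- then kick off a snapshot on demand.
    EXEC sp_addpublication_snapshot @publication = N'SalesPub';
    EXEC sp_startpublication_snapshot @publication = N'SalesPub';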

The middle-man between the Publisher and the Subscribers is the Distributor. The Distributor is essentially a bunch of configuration items (and as we’ll see later, changes to articles), stored in the distribution database – a system database that is often overlooked. If you query sys.databases on a SQL instance that has been configured as a Distributor, you’ll see a row for the distribution database. It won’t have a database_id less than 5, but it will have a value of 1 in the is_distributor column. The instance used as the Distributor is the one whose SQL Server Agent runs most of the replication agents, including the Snapshot Agent.
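
That query is simple enough to show in full:

    -- Find the distribution database on an instance configured
    -- as a Distributor.
    SELECT name, database_id, is_distributor
    FROM sys.databases
    WHERE is_distributor = 1;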

If you’re not doing Snapshot Replication, you’re going to want to get those changes through. Transactional Replication, as the name suggests, involves getting transactions that affect the published articles out to the subscribers. If the replication has been set up to push the data through, this should be quite low latency.

So that SQL Server isn’t having to check every transaction right in the middle of it, there’s a separate agent that looks through the log for transactions that are needed for the replication, copying them across to the distribution database, where they hang around as long as they’re needed. This agent is the Log Reader Agent, and it also runs on the Distributor. You can imagine that there is a potential performance hit if this runs on a different machine to the Publisher, and this is one of the influencing factors that means you’ll typically have the Distributor running on the Publisher (although there are various reasons why you might not).

Now we have a process which is making sure that initialisation is possible by getting snapshots ready, and another process which is looking for changes to the articles. The agent that gets this data out to Subscribers is the Distribution Agent. Despite its name, it can run at the Subscriber, if the Subscriber is set to pull data across (good for occasionally connected systems). This is like with my magazine – I might prefer to go to the newsagent and pick it up, if I’m not likely to be home when the postman comes around. In effect, my role as Subscriber includes doing some distribution if I want to pull the data through myself.

These three agents, Snapshot Agent, Log Reader Agent and Distribution Agent, make up the main agents used for Transactional Replication, which is probably the most common type of replication around. Snapshot Replication doesn’t use the Log Reader Agent, but still needs the other two.
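
To make the pieces concrete, here is a hedged T-SQL sketch of setting up a transactional publication on the Publisher; all the names are placeholders, and a real setup (whether done through these procedures or the wizard in Management Studio) involves more options and agent jobs than shown:

    USE SalesDB;
    -- Enable the database for transactional publishing.
    EXEC sp_replicationdboption
        @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

    -- Create the publication; 'continuous' means transactions flow
    -- through as they happen, rather than on a schedule.
    EXEC sp_addpublication
        @publication = N'SalesPub',
        @repl_freq = N'continuous',
        @status = N'active';

    -- Add an article (here, a table) to the publication.
    EXEC sp_addarticle
        @publication = N'SalesPub',
        @article = N'Orders',
        @source_owner = N'dbo',
        @source_object = N'Orders';

    -- Create a push subscription, so the Distribution Agent runs at
    -- the Distributor and pushes changes out to the Subscriber.
    EXEC sp_addsubscription
        @publication = N'SalesPub',
        @subscriber = N'SUBSERVER',
        @destination_db = N'SalesDB_Copy',
        @subscription_type = N'Push';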

Now let’s consider the other types of replication.

Merge Replication involves having subscribers that can also change the data. It’s similar to Transactional Replication with Updateable Subscribers, which has been deprecated. These changes are sent back to the Merge Agent, which works out what changes have to be applied. This is actually more complicated than you might expect, because it’s very possible to have changes made in multiple places and for a conflict to arise. You can set defaults about who wins, and can override manually through the Replication Monitor (which is generally a useful tool for seeing if Subscribers are sufficiently in sync, testing the latency, and so on). Updateable Subscribers end up using the Queue Reader Agent instead of the Merge Agent. They’re slightly different in the way they run, but I consider them to be quite similar in function, as they both involve getting the data back into the publisher when changes have been made elsewhere.
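
A similarly hedged sketch for merge (placeholder names again; merge articles get a rowguid column added automatically if the table doesn’t have one, and a real setup also needs a Snapshot Agent and subscriptions, not shown here):

    -- Enable the database for merge publishing.
    EXEC sp_replicationdboption
        @dbname = N'SalesDB', @optname = N'merge publish', @value = N'true';

    -- Create a merge publication; retention is how many days a
    -- subscription can go without synchronising before it expires.
    EXEC sp_addmergepublication
        @publication = N'SalesMergePub',
        @retention = 14;

    -- Add an article whose changes can flow in both directions.
    EXEC sp_addmergearticle
        @publication = N'SalesMergePub',
        @article = N'Orders',
        @source_owner = N'dbo',
        @source_object = N'Orders';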

Peer-to-Peer Replication is the final kind. This is really a special type of Transactional Replication, in which you have multiple publishers, all pushing data out to each other. It’s the option that is considered closest to a High Availability system, and is good across geographically wide environments, particularly if connections are typically routed to the closest server. Consider the example of servers in the UK, the US and Australia. Australian users can be connected to the local server, knowing the changes are going to be pushed out to the UK and US boxes. The servers are set up in a topology, with each server considered a node. Each server keeps track of which updates it’s had, which means they should be able to keep in sync, regardless of when they have downtime. If Australian changes are sent to the UK but not the US, then the US can be updated by the UK server if that’s easier.
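
For the curious, peer-to-peer is configured as a transactional publication with particular option values. A rough, hedged sketch follows, with placeholder names; treat the exact set of required options as an assumption, and note that enrolling the other nodes and configuring the topology (usually via the wizard) is not shown:

    -- A transactional publication enabled for peer-to-peer; P2P requires
    -- specific option values (roughly these) and Enterprise Edition.
    EXEC sp_addpublication
        @publication = N'SalesPubP2P',
        @enabled_for_p2p = N'true',
        @allow_initialize_from_backup = N'true',
        @repl_freq = N'continuous',
        @status = N'active',
        @independent_agent = N'true';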

Replication can feel complex. There are a lot of concepts that are quite alien to most database administrators. However, the benefits of replication can be significant, and are worth taking advantage of in many situations. It’s an excellent way of keeping data in sync across a number of servers, without many of the server availability hassles associated with log shipping or mirroring. It can definitely help you achieve scale-out environments, particularly if you consider Peer-to-Peer, which can help you offload your connections to other servers, knowing the key data can be kept up-to-date easily.

I haven’t tried to be completely comprehensive in this quick overview of replication, but if you’re new to the concepts, or you’re studying for one of the MCITP exams and need to be able to get enough of an understanding to get you by, then I hope this has helped demystify it somewhat.

There’s more in SQL Server Books Online, of course – a whole section on Replication. If what I’ve written makes sense, go exploring, and try to get it running on your own servers too. Maybe my next post will cover some of that.

@rob_farley

Roger Noble – Outstanding Volunteer!

Every month, PASS awards someone in the global SQL community their Outstanding Volunteer award. In September, it was LobsterPot Solutions’ Roger Noble! Roger wrote about it on his blog, so head over there and read it. Obviously here at LobsterPot we’re very proud of him, having seen how much effort he puts in, running the Australian edition of the BI Virtual Chapter, being on the Program Committee for Summit and even creating a Pivot collection of the Summit sessions. Congratulations, Roger!

LobsterPot involvement at the PASS Summit

It goes without saying that LobsterPot Solutions is a great supporter of the SQL Server community, and PASS in particular. I thought it would be good to look at how each of the LobsterPot staff is going to be involved at the upcoming PASS Summit.


Rob’s been on the PASS board for over a year now, and is set to continue in this for some time. As a user group leader and current co-ordinator of PASS’s semi-annual 24HOP event, he has a number of meetings and duties to go alongside the Board Meetings that will be on. If you’re going to be at the event, you shouldn’t have too much trouble finding Rob – he’ll even be involved in prayer meetings in the early mornings.

Roger is speaking this year! His session is on Thursday afternoon, and is on Reporting Services. As well as this, 2012 has seen Roger’s PASS involvement extend into leading the Australian arm of the BI Virtual Chapter. This means he’s been finding speakers for the monthly meetings he hosts, and you might see him wandering around the Summit in a VC leaders’ jacket.

Julie has been a co-presenter at the PASS Summit before, but this year is only giving a Lightning Talk. I shouldn’t say ‘only’, because the pressure in delivering ten minutes of content is at least as high as in giving an hour or more. The Lightning Talks are punchy and poignant, energetic and expert-level. Julie’s also leading a First-Timers’ group at the Summit, and speaking at SQL Saturday events in Portland and Olympia either side of the Summit.

Ben and Heidi are both experiencing their second PASS Summit, and like Julie are both involved with the First-Timers. Ben is leading a First-Timers’ group, and Heidi has been on the First-Timers’ committee this year, working on a team to find ways to help people at their first Summit get the most out of it. The first time anyone experiences a PASS Summit, with 4000 people excited to learn about SQL Server and mix with other SQL Server experts, it can be hard to know what’s important. Being in an Australian time zone can make committee work tricky, but Heidi has gone a long way in helping these first-timers. Ben, Julie and Heidi will have a great time helping people get the most out of the event.

This is Martin’s first PASS Summit. He’s experienced conferences around the world before, including a number of SQL Bits events in the UK. He’s taking part in the First-Timers’ event, but from the other side of the room.

All of us certified in SQL Server 2012

Microsoft have introduced some new certifications for SQL Server 2012. Instead of Microsoft Certified Technology Specialist and IT Professional, they have moved to Solutions Associate and Solutions Expert with a new set of exams. All the LobsterPot consultants sat some of the new exams in beta, and all of us are now certified in SQL Server 2012!

more...

Rob sat all seven exams and picked up both MCSE: Data Platform and MCSE: BI. Ashley got MCSE: BI too, while Martin and Roger are each only one exam away from an MCSE certification. Ben and Heidi also passed exams, and in doing so, Heidi has become a brand new Microsoft Certified Professional!