LavaBlast Software Blog

Help your franchise business get to the next level.



FranchiseBlast Helps Boomerang Kids Franchise Expand 25 Jun 2014 10:39 AM (10 years ago)

FranchiseBlast has been featured in a recent case study by Intel Canada.

Large franchises have sophisticated software to help franchise owners manage sales, order product, control inventory, and manage other aspects of the business. Until recently, these systems were too expensive for smaller franchisors to implement. FranchiseBlast, a software solution developed by Gatineau-based LavaBlast Software Inc., makes available the management, inventory and purchasing tools a franchisor needs, at a cost that growing franchises can afford, while providing IT support to keep systems running reliably.

Read more here!


Gotcha: String or binary data would be truncated. 18 Mar 2014 8:42 AM (11 years ago)

I was playing in SQL Server this morning, trying to fix an odd bug. It took me a while to find it, so I thought I’d share this tidbit with you.

Here’s an overly simplistic representation of what was causing the “String or binary data would be truncated.” error message:

declare @where nvarchar(max);
-- assume something put a large string in @where (over 8000 characters).
declare @sql nvarchar(max);
select @sql = replace('select * from table1 where {0} order by column1 asc', '{0}', @where);
exec (@sql);

 

The reason I experienced this error is how REPLACE handles nvarchar(max): because the first parameter is not of a max type, the return value is truncated at 8,000 bytes. By explicitly casting the first parameter to nvarchar(max), the error is resolved.

declare @where nvarchar(max);
-- assume something put a large string in @where (over 8000 characters).
declare @sql nvarchar(max);
select @sql = replace(cast('select * from table1 where {0} order by column1 asc' as nvarchar(max)), '{0}', @where);
exec (@sql);

 

From the documentation:

If string_expression is not of type varchar(max) or nvarchar(max), REPLACE truncates the return value at 8,000 bytes. To return values greater than 8,000 bytes, string_expression must be explicitly cast to a large-value data type.


Lessons learned during 48hrs in the Valley 11 Dec 2013 6:00 AM (11 years ago)

We recently attended the C100's flagship event named “48 hrs in the Valley” and want to share some key moments, lessons learned and observations. The event was filled with so many activities that it is difficult to distill everything into a concise picture, but we'll give it our best shot. Before we get started, we'd like to take a moment to thank the C100's organizing committee for their great work. Coordinating this type of event is very challenging, and we appreciate the effort put into it.

Key Moments

Picture by Kris Krüg

Rob Burgess' insightful talk is the first thing that comes to mind when looking back at the event. Coming from a web design & development background, it was awesome to hear the inside story behind Flash. After becoming CEO of Macromedia, Rob had the foresight to pretty much cancel all development on the company's main revenue source (tools like Shockwave) and re-orient resources towards building products for the web (aka Flash). Given how drastically the industry has changed, this was the right decision, but the amount of guts it took to perform this pivot is mind boggling. Pivots in startups are difficult, but completely re-orienting a successful & profitable company with tons of money in the bank is much more challenging.

We also were fortunate to be matched with Debbie Landa for one of our one-on-one mentoring sessions. What started out with “I know nothing about franchises” concluded with a plan to revolutionize the franchise industry. By making parallels to the venture capital world, the future appeared obvious to us and we validated that FranchiseBlast is in a great position to completely alter the industry. Debbie had the energy and big vision we expected to find in the Valley. Combined with the open-mindedness to learn new things and the creativity required to challenge assumptions, these traits guarantee success regardless of your geographical location.

Being a bootstrapped startup not looking for funding, pitching to venture capitalists was also an interesting change of pace. The dynamics of each pitch were completely different. The first presentation was made to an analytical VC with a great poker face. Razor-sharp questions followed in quick succession, leading up to very insightful comments. It was the toughest meeting, but also one of the most valuable. Our second presentation was characterized by stellar flow: each slide was followed by a question answered on the next slide. It was a short meeting due to time constraints, but even in this short blitz one could sense the intellectual alignment. It's great to work with people with whom you can have fast-paced exchanges. Our third pitch slowed things down: we were given twice as much time as allotted and it ended up being more of a conversation than a pitch. This VC had domain expertise not found in the other meetings, which led the discussion in a completely different direction. The final pitch ended up being the easiest (emotionally) with great validation but few challenges. Putting myself in their shoes, though, I understand how gruelling it can be to deliver insights which can push companies to the next level, pitch after pitch.

Finally, I enjoyed the “both sides of the deal” talk where a startup and their VC discussed their deal from different perspectives. Not only was it extremely funny, it was also very insightful. Rather than discuss the specifics, let us dive into key lessons learned – some of which emanated from this talk.

Key Lessons Learned

Picture by Kris Krüg

Although we learned a lot during these 48 hours, we didn't necessarily learn anything explicitly taught. These lessons materialized after talking to enough people in Silicon Valley and reflecting on their thought process.

First, the importance of shared vocabulary cannot be overstated. In the software world, best practices are often boiled down to design patterns. When two software engineers have internalized the concepts behind these patterns, they can propose & refine software architectures very efficiently. The same shortcuts apply to everything in the Valley: software, finance, companies, people, eras and methodologies. While we do not personally stay abreast of every hot new startup mentioned in tech news and feel it gets in the way of getting things done, we acknowledge that shared vocabulary is critical. In particular, being aware of some of the key events which shaped the technology industry, along with a general knowledge of current trends, helps us align ourselves with success and avoid repeating past failures.

Furthermore, having intimate knowledge of the people behind those events is key. In our early days, we saw networking events as a chance to meet interesting people. We went into an event not expecting much and that's precisely what we got: nothing much. However, we unknowingly started to build a network of peers and, after a few years, we're now connecting some dots. We can start transposing our concrete needs onto the desire to meet specific individuals – or at least give our interlocutor enough information to help guide us to a person who meets our criteria. Although you may randomly bump into the perfect contact, it is much more efficient to do your homework and seek out individuals yourself. As an aside, we purposefully dedicated some time during the event to plugging other local startups (Exocortex, Shopify, Project Speaker, etc.) when meeting relevant individuals because we firmly believe that we're not only founders, we're ambassadors for other startups in our community. “A rising tide lifts all ships”, as Scott Annan often says when speaking to the Ottawa startup community.

Picture by Etienne Tremblay

We also discovered that the more successful your company becomes, the lonelier it becomes for the founders. By this we don't mean people start ignoring you or despise you to the point of throwing rocks in your direction. No, in fact, we mean that the essence of loneliness is derived from the fact that you can't talk about your fears, successes, challenges or motivations with anyone else. To help illustrate this, visualize entrepreneurship as a pyramid of thousands of layers, where the dimensions of each layer represent the number of likeminded individuals & companies. When you first start out at the base, pretty much anyone can give you valuable business advice. However, as your business grows, the value of this advice diminishes. This causes you to look elsewhere (higher up in the pyramid) for high-impact advice, but it becomes exponentially more difficult to find. As an example, when you've raised venture capital, you may find that there is a limited pool of likeminded entrepreneurs in your city with whom you can discuss your challenges; this forces you to branch out. We believe the same logic holds for every major transition in your company's lifecycle, from your first part-time freelancing gig to IPO to managing a trillion dollar company. In the technology industry, we believe the entrepreneurship pyramid reveals Silicon Valley's greatest asset for founders: a greater density of likeminded individuals to accompany you in your journey.

Key Observations / Thoughts


FranchiseBlast invited to the C100’s 48hrs in the Valley 20 Nov 2013 4:15 PM (11 years ago)

48hrs in the Valley is the C100's flagship mentorship program, put on in conjunction with the Canadian Consulate in San Francisco and Palo Alto. Twice a year, the C100 invites 20 of Canada's most promising startups to Silicon Valley for two days of mentorship, workshops, investor meetings, strategic partner visits and networking.

FranchiseBlast is proud to have been selected for this exciting event.  We started the company six years ago, wrote our software startup lessons learned series, and have been continuously improving our product and our company since then. Our focus has increased and so has our drive. It’s been a great ride to date and we know we’re at an inflection point in the company’s journey.

Once the dust settles, we’ll collect our thoughts and write about our experience, just like we did for Ottawa’s Lead To Win program.

In the meantime, we would like to remind you that we are actively hiring. Join us.


ASP.NET translation tools & gotchas 8 Mar 2013 7:35 AM (12 years ago)

We’ve recently translated one of our applications and thought we’d share the tools & techniques we used. Every time we work on ASP.NET translation, we hit the same few gotchas, so we figured we might as well write everything down in a post for everyone’s benefit.

Technique

Because we’re translating an ASP.NET WebForms application, the main process is to open an *.aspx or *.ascx, switch to Design view, and perform Tools –> Generate Local Resource. This generates a *.resx file and adds the relevant markup in your source file. Tools are available to perform the actual translation and create the *.resx files in other languages.

The core issue here is that you need to perform this operation on each individual file – potentially thousands of times, or until you go crazy. (Personally, I find it frustrating that bulk resource generation is not a core VS.NET feature.)

 

Step 1 - Bulk Generate Local Resource

Instead of wasting our time opening each individual file, we found a macro on this forum. The macro does not run in VS.NET 2012, so we loaded up our old VS.NET 2010 and ran it from there.

The macro wasn’t flawless – it sometimes randomly crashed after processing files for half an hour. Deleting Visual Studio’s *.suo file and restarting it seemed to help.

 

Step 2 - Realize that VS.NET corrupted your files

I assume one of the reasons bulk resource generation is not a core VS.NET feature is because the feature is (in addition to being slow) partially broken.

Gotcha: Inline scripts/comments are sometimes deleted.

At a high level, any script blocks in your *.aspx/*.ascx files are vulnerable to deletion. Generate Local Resource will simply strip them out if they are contained in an <asp:UpdatePanel …/>.   We filed a bug on Microsoft Connect which was not deemed important enough to be fixed.

This is appalling because it will introduce pernicious bugs in your application that only show up at runtime, if you don’t pay close attention to each and every individual file.

For example:

<script>alert('<%="Some Constant" %>');</script>
<script>alert('<%= btnSomething.ClientID %>');</script>
<%-- <asp:Button runat="server" id="btn" Text="Some button that I may need to re-enable later"/> --%>
<% 
if (Request.QueryString["test"]=="bye") 
    Response.Write("Goodbye World"); 
else 
    Response.Write("Hello World"); 
%>

This would become the following after running Generate Local Resource, because everything above is contained in an UpdatePanel:

<script>alert('');</script>
<script>alert('');</script>

 

Admittedly, some of the inline code above is bad practice.  However, the silent deletion causes needle-in-a-haystack type bugs at runtime.

We decided to remove all of our inline code blocks from our code to avoid having issues during local resource generation.
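For illustration, here is a hedged sketch (the handler and key names are hypothetical, not our actual code) of moving the inline ClientID script above into the code-behind, where Generate Local Resource leaves it alone. Inside an UpdatePanel, ScriptManager.RegisterStartupScript would be used instead of ClientScript.

// Hypothetical code-behind sketch: emit the script from C# rather than via an
// inline <%= %> block that Generate Local Resource could silently strip.
protected void Page_Load(object sender, EventArgs e)
{
    // Same alert as the inline example, built with the server-side ClientID.
    string script = string.Format("alert('{0}');", btnSomething.ClientID);

    // The final 'true' argument wraps the script in <script> tags for us.
    Page.ClientScript.RegisterStartupScript(GetType(), "btnSomethingAlert", script, true);
}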

Gotcha: culture="auto" and uiculture="auto" are added to all *.aspx files

These values, added in the *.aspx header, force the page to change culture based on the browser’s settings. In our application, this was not desirable as it by-passed the logic defined in our Global.asax file. (Our users can change their language via the web application itself, not via their web browser settings.)

For more information, see this post by Rick Strahl.
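For reference, here is a minimal sketch of the kind of Global.asax logic we mean – the cookie name and fallback culture are hypothetical, not our production values – where the culture comes from a user preference instead of the browser:

// Hypothetical Global.asax.cs sketch: read the culture from a user preference
// (a cookie here) instead of letting culture="auto" use the browser headers.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    var cookie = Request.Cookies["lang"];                        // assumed cookie name
    string culture = (cookie != null) ? cookie.Value : "en-CA";  // assumed fallback

    System.Threading.Thread.CurrentThread.CurrentUICulture =
        new System.Globalization.CultureInfo(culture);
    System.Threading.Thread.CurrentThread.CurrentCulture =
        System.Globalization.CultureInfo.CreateSpecificCulture(culture);
}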

Gotcha: Nested controls can be problematic

When trying to localize a LinkButton containing an Image and literal, the Image will be dropped.

<asp:LinkButton ID="lnkHello" runat="server" OnClick="lnkHello_Click">
    <asp:Image ID="imgEdit" runat="server" ImageUrl="~/images/icons/edit.gif"></asp:Image>
    HELLO!
</asp:LinkButton>


Becomes:

<asp:LinkButton ID="lnkHello" runat="server" OnClick="lnkHello_Click" meta:resourcekey="abcdef">    
    HELLO!
</asp:LinkButton>

 

To solve this issue, the nested controls must be separated.

Gotcha: Ajax:Accordion breaks during Generate Local Resource

If you are using <ajax:Accordion ../> from the ASP.NET Ajax control toolkit, be aware that it will be corrupted after generating *.resx files. The fix is simple: delete the erroneously added Accordion Extender.

 

Step 3 – Extract other hardcoded strings.

Your *.aspx/*.ascx files and your *.cs files may contain additional strings which must be extracted. Back in 2008, we had created a macro to help with this process, but in this iteration we simply used JetBrains ReSharper. The VS.NET plugin made it easy to find strings which had not been extracted and push them into *.resx files. ReSharper is jam-packed with other useful features, but we’ve found that it does have a significant impact on performance in our solution.
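To illustrate the kind of change this produces (the control, resource file and key names below are hypothetical examples, not from our application), a hardcoded string in the code-behind ends up replaced by a lookup into a *.resx resource:

// Hypothetical code-behind sketch showing a string before and after extraction.
protected void ShowConfirmation()
{
    // Before extraction: a hardcoded, untranslatable string.
    // lblStatus.Text = "Your order has been saved.";

    // After extraction: the text lives in App_GlobalResources/Strings.resx under
    // the key "OrderSaved", with one *.resx file per language.
    lblStatus.Text = (string)HttpContext.GetGlobalResourceObject("Strings", "OrderSaved");
}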

 

Step 4 – Perform the actual translation

Back in 2008, we released a web application to help translate RESX files. We’re no longer using this application – there are better options out there. We picked Zeta Resource Editor and it worked nicely.

Conclusion

The tools available today are much better than they were five years ago, but one piece of the puzzle (Generate Local Resource) is still far from perfect. We’d love to see an improved version (in either VS.NET or ReSharper) which would:

PS: Big thanks to @plgelinas for his research efforts for this project.


jQuery plugin to postback an ASP.NET button 20 Aug 2012 7:52 AM (12 years ago)

We use jQuery a lot here at LavaBlast, but we also use ASP.NET WebForms. We needed a simple, reusable way to cause a postback on an ASP.NET Button or LinkButton.

Here is how it would be used for <asp:Button ID="btShow" runat="server" OnClick="DoSomething" />:

// Cause btShow to postback to the server
$('[id$="btShow"]').postback();

If you are not too familiar with jQuery, the selector [id$="btShow"] searches for any control with an id which ends with "btShow".

Since ASP.NET 4.0, you could also use the new ClientIDMode="Static" property on the server control to get a static ID on the client and use a jQuery selector like $('#btShow'), but that is a matter for another discussion entirely.

The postback() method is a jQuery plugin which I include here:

(function ($)
{
    $.fn.extend({
        postback: function ()
        {
            return this.each(function ()
            {
                // Buttons expose a native click() method; triggering it submits
                // the form through the regular ASP.NET postback machinery.
                if (this && "undefined" != typeof this.click)
                    this.click();
                // LinkButtons render as <a href="javascript:__doPostBack(...)">,
                // so extract the javascript: href and evaluate it instead.
                else if (this && this.tagName.toLowerCase() == "a" && this.href.indexOf('javascript:') == 0)
                    eval(this.href.toString().replace('javascript:', ''));
            });
        }
    });
})(jQuery);

Feel free to use this and let us know if you find any problems with the code.


Style ASP.NET Web Forms Validators with qTip 2 13 Aug 2012 5:20 AM (12 years ago)

View demo | Download source

The default validators inside ASP.NET Web Forms are quite uninteresting and require some styling work to look adequate.  Recently, we’ve been using the qTip2 jQuery library and we love it.  qTip enables you to add visually pleasant tooltips to any element.  For example, you simply add a "title" attribute to any element, apply qTip to it, and the "title" attribute will be used as the tooltip’s text.  This is the simplest use case.  Here’s an example with our FranchiseBlast registration form.


When you try to submit this form and validation doesn’t pass, the default ASP.NET validators are replaced with styled qTip tooltips beside each validated element.


As you can see, the validators have absolute positioning, which enables them to flow outside of the bounds of the registration panel.  We could also easily change the position of the bubble in relation to the validated element, as well as the bubble tip position.

Let’s take a look at what was needed to accomplish this, using a simple ASP.NET project. Here is the main ASP.NET code for the ASPX page.  Nothing fancy: a simple form with some validators:

Default.aspx

<asp:ScriptManager ID="p" runat="server">
    <Scripts>
        <asp:ScriptReference Path="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" />
        <asp:ScriptReference Path="~/Scripts/qtip/jquery.qtip.min.js" />
        <asp:ScriptReference Path="~/Scripts/validators.js" />
    </Scripts>
</asp:ScriptManager>
<fieldset class="Validate" style="width: 300px">
    <legend>Tell us about yourself</legend>
    <div>
        <span class="label">Business Name:</span>
        <asp:TextBox ID="txtBusinessName" runat="server" />
        <asp:RequiredFieldValidator ID="rfvBusinessName" runat="server" ControlToValidate="txtBusinessName" Text="Your business name is required" SetFocusOnError="true" EnableClientScript="true" />
    </div>
    <div class="alternate">
        <span class="label">Your Name:</span>
        <asp:TextBox ID="txtYourName" runat="server" />
        <asp:RequiredFieldValidator ID="rfvName" runat="server" ControlToValidate="txtYourName" Text="Your name is required" SetFocusOnError="true" EnableClientScript="true" />
    </div>
    <div>
        <span class="label">Your Email:</span>
        <asp:TextBox runat="server" ID="txtEmail" />
        <asp:RequiredFieldValidator ID="rfvEmail" runat="server" ControlToValidate="txtEmail" Text="Email is required" SetFocusOnError="true" EnableClientScript="true" />
        <asp:RegularExpressionValidator runat="server" ID="revEmail" Text="Invalid Email" ControlToValidate="txtEmail" SetFocusOnError="true" ValidationExpression="^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@(([0-9a-zA-Z])+([-\w]*[0-9a-zA-Z])*\.)+[a-zA-Z]{2,9})$" EnableClientScript="true" />
    </div>
</fieldset>
<asp:Button runat="server" ID="btnCreateAccount" CssClass="Next" Text="Create Account" />

Starting from the top, we need jQuery and qTip to be added to our page.  The interesting JavaScript code is located in ~/Scripts/validators.js.  The rest of the code here is a simple ASP.NET form.  One important thing is that each element to be validated is enclosed in a <div> with its corresponding validators.  This is important because we will use this convention later in our script to find the associated validators for an input control.

I also have to mention that I added some lines in the .skin file of the App_Theme:

Default.skin

<asp:RequiredFieldValidator runat="server" CssClass="ErrorMsg" Display="Dynamic" />
<asp:CustomValidator runat="server" CssClass="ErrorMsg" Display="Dynamic" />
<asp:RangeValidator runat="server" CssClass="ErrorMsg" Display="Dynamic" />
<asp:CompareValidator runat="server" CssClass="ErrorMsg" Display="Dynamic" />
<asp:RegularExpressionValidator runat="server" CssClass="ErrorMsg" Display="Dynamic" />

This will force CssClass="ErrorMsg" on the validators.  This class will be used next in our JavaScript code to find the validators:

validators.js

Sys.WebForms.PageRequestManager.getInstance().add_pageLoaded(function () {
    function getValidator() {
        return $(this).parent().find('.ErrorMsg').filter(function () { return $(this).css('display') != 'none'; });
    }
 
    var inputs = '.Validate input:text, .Validate select, .Validate input:password';
 
    var submit = $('input:submit');
 
    var q = $(inputs).qtip({
        position: {
            my: 'center left',
            at: 'center right'
        },
        content: {
            text: function (api) {
                return getValidator.call(this).html();
            }
        },
        show: {
            ready: true,
            event: 'none'
        },
        hide: {
            event: 'none'
        },
        style: {
            classes: 'ui-tooltip-red ui-tooltip-shadow ui-tooltip-rounded'
        },
        events: {
            show: function (event, api) {
                var $this = api.elements.target;
                var validator = getValidator.call($this);
                if (validator.length == 0)
                    event.preventDefault();
            }
        }
    });
 
    if (window.Page_ClientValidate != undefined) {
        function afterValidate() {
            $(inputs).each(function () {
                var validator = getValidator.call(this);
 
                if (validator.length > 0) {
                    var text = validator.html();
 
                    $(this).addClass('Error').qtip('show').qtip('option', 'content.text', text);
//                    validator.hide();
 
                }
                else
                    $(this).removeClass('Error').qtip('hide');
            });
        }
 
        $(inputs).blur(afterValidate);
 
        var oldValidate = Page_ClientValidate;
 
        Page_ClientValidate = function (group) {
            oldValidate(group);
 
            afterValidate.call(this);
 
            submit.removeAttr('disabled');
        }
    }
});

There is much to explain in this code.  First we register a new function to be executed each time there’s an ASP.NET PostBack on the page here: Sys.WebForms.PageRequestManager.getInstance().add_pageLoaded(function () { … });

The function getValidator finds the visible ASP.NET validators associated with the control being validated.  We use the fact that the control to validate and its validators are contained inside the same <div>.

We apply qTip to the inputs to validate and we get the text of the message by finding the visible validators.  Also we have some logic to prevent showing the qTip element if there aren’t any visible validators.

We also do some monkey patching at the end, where we inject our own code inside the Page_ClientValidate ASP.NET JavaScript method.  To do that, we simply get a reference to the Page_ClientValidate function, create a new function with our additional code (calling the old Page_ClientValidate), and override window.Page_ClientValidate with our new function.  This new function has both the new and old functionality.

You would probably have to modify this code a little bit to fit your needs, but this shows how you could integrate qTip2 for nicer validators in ASP.NET Web Forms.

View demo | Download source


Microsoft Excel on Multi-Monitor Machines 5 Jun 2012 8:51 AM (12 years ago)

All of the developers at LavaBlast use three monitors; utilizing multiple monitors has significantly increased our efficiency. However, Microsoft Excel doesn’t work particularly well in a multi-monitor setup. By default, every time you open a new Excel file, its contents are displayed within the same instance. You have to manually launch other instances of Excel to have one instance per monitor, which is time consuming.

It is possible to configure Microsoft Excel to load one window per file, but it involves a number of obscure configuration settings & registry changes. Every time we move to a new machine, this configuration needs to be redone. The information is spread out over a number of sites/forums and it takes a while to re-discover the sources. This post aims to centralize this information.

In particular, this post focuses on Microsoft Excel 2010 on Windows 7 64-bit. I believe the fix works on other versions as well; feel free to comment on this blog post if the steps are different.

Step 1) Force Excel To Open Multiple Windows

Excel 2010: File –> Options –> Advanced –> under the General section, check "Ignore other applications that use Dynamic Data Exchange (DDE)".

Excel 2007: Office button –> Excel Options –> Advanced –> under the General section, check "Ignore other applications that use Dynamic Data Exchange (DDE)".

Once this change is done, every time you double click on an Excel file in Windows Explorer, a new instance of Excel will open. However, you’ll probably encounter the following error.

Step 2) Fixing “There was a problem sending the command to the program”

Each Excel file you open from Windows Explorer now launches in its own separate window. However, Excel spits out “There was a problem sending the command to the program” and leaves the Excel window blank.  You can drag & drop your existing file onto this window to open it, but this is still painful. We will need to change the system registry to solve this issue; please refrain from doing this if you are not comfortable with the Registry Editor (regedit).

  1. Launch regedit
  2. Rename the HKEY_CLASSES_ROOT\Excel.Sheet.8\shell\Open\ddeexec key to HKEY_CLASSES_ROOT\Excel.Sheet.8\shell\Open\ddeexec.bak
  3. Edit HKEY_CLASSES_ROOT\Excel.Sheet.8\shell\Open\command\(Default). Change /dde to "%1" in the value.
  4. As an example, mine went from "C:\Program Files (x86)\Microsoft Office\Office14\EXCEL.EXE" /dde to "C:\Program Files (x86)\Microsoft Office\Office14\EXCEL.EXE" "%1"
  5. Edit HKEY_CLASSES_ROOT\Excel.Sheet.8\shell\Open\command\command. Change /dde to "%1" in the value.
  6. As an example, mine went from ykG^V5!!!!!!!!!MKKSkEXCELFiles>VijqBof(Y8'w!FId1gLQ /dde to ykG^V5!!!!!!!!!MKKSkEXCELFiles>VijqBof(Y8'w!FId1gLQ "%1"
  7. Rename the HKEY_CLASSES_ROOT\Excel.Sheet.12\shell\Open\ddeexec key to HKEY_CLASSES_ROOT\Excel.Sheet.12\shell\Open\ddeexec.bak
  8. Edit HKEY_CLASSES_ROOT\Excel.Sheet.12\shell\Open\command\(Default). Change /dde to "%1" in the value.
  9. Edit HKEY_CLASSES_ROOT\Excel.Sheet.12\shell\Open\command\command. Change /dde to "%1" in the value.

 

Excel should now load a separate window for each file you open. This setup will consume more memory, but it will vastly increase your productivity.

Troubleshooting note:

Thanks to Turbo2001rt  for the final important tweaks.


FranchiseBlast Wins Bootstrap Award 27 Feb 2012 6:07 AM (13 years ago)


We’re proud to announce that 2012 is off to a great start! We’ve recently received lots of local recognition and thought we’d share this great news with you.

First, we’ve been listed as a Startup To Watch for 2012 by the Ottawa Business Journal. Past nominees (Chide.it, FaveQuest, Select Start Studios and PatientWay, to name a few) have had a tremendous impact on the Ottawa-Gatineau startup community and we strive to do the same. For decades, our region has featured a tremendous wealth of engineering talent and we’re proud to be a part of the group of companies rebuilding our digital economy.

Second, we’ve won a Bootstrap Award for Best Sales/Value Proposition. This award recognizes companies that have grown without the use of external funding (such as venture capital). We’ve been growing organically since our creation in 2007, and bootstrapping has enabled us to focus on creating value for our customers from day one. Today, we have an awesome product that is a perfect fit for our target market. If we had to name a single element which helped us refine our value proposition (other than listening to our customers for five years), it would be Lead To Win.

Lead To Win is a startup ecosystem/accelerator (which takes no equity) that helps companies get to market faster and/or accelerate their growth. We strongly recommend the program to other high-tech entrepreneurs, especially engineering students who don’t have a background in business.

Thank you to everyone who’s vouched for us over the years. 2012 will be a year of great growth for us and we hope to share more good news soon!


FranchiseBlast Now Member of the CFA and CQF 17 Feb 2012 7:25 AM (13 years ago)

LavaBlast Software Inc. (creator of FranchiseBlast) is proud to announce that it is now a member of both the CFA (Canadian Franchise Association) and the CQF (Conseil Québécois de la Franchise / Quebec Franchise Association). Over the past five years, we’ve helped numerous franchises grow thanks to improved operational software and we feel the time is now ripe to get involved in these franchise associations. We hope to have the pleasure to meet you at one of the upcoming CFA or CQF events, such as the CFA’s National Convention in April 2012.



New Grant for Canadian Franchises to Adopt Tech 15 Nov 2011 7:18 AM (13 years ago)

(From left to right) Jason Kealey (President, LavaBlast Software), The Honourable Christian Paradis (Minister of Industry)

Yesterday, the Minister of Industry announced a new grant pilot program (DTAPP) offering up to $99,999 in financial support to Canadian small- and medium-sized enterprises (SMEs) to facilitate the adoption of digital technologies. The announcement featured FranchiseBlast as an example of such a digital technology and was made inside one of the Boomerang Kids stores, our newest franchise client (see photo).

This pilot program is great news for Canadian franchises as it includes the adoption of business systems (franchise management, customer/work order management, inventory management, etc.). In the context of a franchise, these are often customized systems ensuring the uniformity of their proprietary business processes across all franchisees. Off-the-shelf hardware and software are not covered by this grant, but the following are:

The new grant program is managed by NRC-IRAP. As with all NRC-IRAP grants, the process starts with the franchisor developing a relationship with an Industrial Technology Advisor (ITA). Over 240 ITAs, located all over Canada, will work with you to determine the best course of action for your business, whether it be via the new Digital Technology Adoption Pilot Program (DTAPP) or one of the numerous existing grant programs.

As our specialty is creating franchise-specific software solutions, we’ve gone through the process in the past. Our team can work with both you and your ITA to establish the scope and requirements for your project.

For more information about DTAPP, please visit this site and call toll-free 1-855-453-3940 to be assigned an ITA in your area. 


LavaBlast and Boomerang Kids: When helping local families meets the Cloud 14 Nov 2011 4:17 PM (13 years ago)

(From left to right): Jason Kealey (President LavaBlast Software Inc.), Honourable Christian Paradis (Minister of Industry), Bogdan Ciobanu (Director General NRC-IRAP), Lynne Plante (Directrice NRC-IRAP), Heather Meek (co-owner, Boomerang Kids Consignment Shops)

LavaBlast, a leading provider of cloud-based franchise management solutions, announced today the deployment of its flagship product, FranchiseBlast, to the first of four Boomerang Kids locations. This state of the art software solution enables Boomerang Kids to grow their consignment franchise nationwide while allowing local families to shop smarter.

"Using the FranchiseBlast system will allow employees to focus more on helping local families," said Heather Meek, owner of Boomerang Kids. "We are expanding our franchise throughout Canada and we want to ensure the success of our current and future franchisees. FranchiseBlast will allow us to offer a complete easy-to-use system that helps store owners, employees and their customers. And now, I can even manage my business on my iPad!"

The FranchiseBlast deployment consists of an integrated suite of local and cloud-based tools that allow Boomerang Kids to automate the management recipes they’ve perfected throughout the years and replicate them in a franchise environment. FranchiseBlast will boost Boomerang Kids’ efficiency and customer service with:

"We are excited to be powering the expansion of a local franchise. Boomerang Kids has a solid management team and now has the tools to support its upcoming rapid growth." said Jason Kealey, President of LavaBlast. "This collaboration strengthens our position in the Franchise Management market and has allowed us to bring on new team members and scale up our operations."


About Boomerang Kids:

At Boomerang Kids, families can help the planet and their wallet through reuse and recycling of kids clothing and equipment. Parents bring the items into the store and Boomerang Kids will take care of verifying quality, selling and, best of all, sharing profits. The concept is extremely popular and independent of the economic climate. From their four initial locations in the Ottawa region, Boomerang Kids is now expanding Canada-wide via franchising.


About LavaBlast Software Inc.:

LavaBlast produces state of the art software solutions for the franchise industry and has played an integral part in the growth of numerous franchises, both in Canada and globally. By migrating to FranchiseBlast, franchisors reap the benefits of a turn-key software solution for their franchisees and LavaBlast’s deep software engineering skills to adapt their franchise in a rapidly changing technological environment.


About our flagship product, FranchiseBlast:

FranchiseBlast empowers you to run a successful franchise business with easy-to-use operational software. Manage day-to-day issues with franchisees, see everything happening in real-time and increase the level of control you have over your franchise business.

Download this press release (PDF format).


LavaBlast POS v4.0.0 6 Sep 2011 10:49 AM (13 years ago)

We’re just about to release version 4.0.0 of our franchise point of sale system. One of the most noteworthy changes is that we’ve given the look & feel a major overhaul, thanks to jQuery Mobile, which we’ve blogged about previously. We thought we’d take a minute to share with you what makes it so special!

First off, I’ve recorded a short video featuring a variation of our franchise POS for the Teddy Mountain franchise. Teddy Mountain provides the stuff-your-own-teddy-bear experience to children worldwide and has been using our POS since 2006.

 

As you’ll see, I focus on a few of our differentiators in the point of sale space. We’re not a point of sale company and our POS is not conventional: we’re a franchise software company and we’ve created the best point of sale system for a franchise environment.

We bake a franchise’s unique business processes into the point of sale, making it very powerful while still extremely easy to use. By integrating our point of sale with FranchiseBlast, we’ve also eliminated dozens of standardization/uniformity issues which face small retail chains or franchises.

Furthermore, we’ve given additional focus to cross-browser compatibility in this release, as our POS is not only used on regular POS hardware (in brick & mortar stores) but also on the Apple iPad for back office operations and for managing the warehouses that feed our franchise e-commerce websites.  We’re definitely excited by the potential tablets have for both retail and service-based franchises! Expect more news from us in this space soon!

In the meantime, if you know of small chains / new franchises which want to explore disruptive technologies in their locations, we hope you’ll point them in our direction!


Gotcha: Reporting Services Viewer bugs on Google Chrome 28 Jun 2011 8:09 AM (13 years ago)

We include the ASP.NET ReportViewer which comes with Microsoft SQL Reporting Services inside some of our applications. Simply put, it generates a web-based version of the report and can easily be integrated within a website. However, the ReportViewer has been plagued with numerous cross-browser compatibility bugs over the years. Some have been fixed, while others remain. Recently, we’ve had issues with broken toolbars and sizing in Google Chrome. Here is the markup we use:

<div style="background-color: White; width: 950px" id="rpt-container">
    <rsweb:ReportViewer ID="ReportViewer1" runat="server" Font-Names="Times New Roman"
        Font-Size="8pt" Height="700px" Width="950px" ShowExportControls="true" ShowPrintButton="false" 
        ShowRefreshButton="false" ShowZoomControl="false" SkinID="" AsyncRendering="true"
        ShowBackButton="false">
        <LocalReport ReportPath="contract.rdlc"
            DisplayName="Contract">
        </LocalReport>
    </rsweb:ReportViewer>
</div>
<asp:ScriptManagerProxy ID="proxy" runat="server">
    <Scripts>
        <asp:ScriptReference Path="~/js/fixReportViewer.js" />
    </Scripts>
</asp:ScriptManagerProxy>
The fixes we found on other websites (setting the display to inline-block on the included tables) only worked for the first load – as soon as the report changed due to AsyncRendering="true", the toolbars were broken again. This was fixed by replacing jQuery’s ready function with Microsoft ASP.NET Ajax’s PageLoaded function.

We also noticed that these fixes broke our width & height. We pinpointed the issue to the generated HTML table with the id ending with fixedTable, which needed to be left as display: table instead of inline-block. We thus adapted the JavaScript.

The HTML wraps the ReportViewer with a div, mostly for convenience (to avoid peppering our code with <%= ReportViewer1.ClientID %>). Furthermore, if my memory serves me well, we set the background-color manually because some browsers made the ReportViewer transparent.

Hope this helps! If you find more elegant ways of doing this, or know of more gotchas, please let us know!


Using Microsoft POS for .NET in 2011 6 Jun 2011 5:41 AM (13 years ago)

Five years ago, we decided to utilize Microsoft’s Point Of Service for .NET (POS for .NET) in our point of sale (POS) to integrate with the various peripherals used by POS systems. Simply put, POS for .NET enables developers to utilize receipt printers, cash drawers, barcode scanners, magnetic stripe readers (MSR), line displays (and many other peripherals) within their .NET applications. Back then, the .NET framework was at version 2.0. Obviously, many things have changed since then with the advent of .NET 3.0, 3.5 and, more recently, 4.0. However, the latest version of POS for .NET is v1.12, released in 2008.

Being forward-thinking as we are, we structured our point of sale as a web application from day one, to enable future deployment scenarios (being browser-based means we can easily use our point of sale on the iPad or any other hot hardware platform) and code reuse within our e-commerce application and FranchiseBlast. However, this made it a bit harder on us to integrate with the peripherals, as we weren’t using them in the traditional context of a desktop application (especially accessing Windows printers from a server-side web application). We solved those issues many years ago and have continued to evolve the solution ever since.

Fast forward to 2011: POS for .NET has not been refreshed in three years, we’ve moved to 64-bit machines and .NET 4.0. This blog post is a collection of tips & tricks for issues commonly faced by .NET developers working with POS for .NET in 2011.

Common Control Objects – don’t forget about them!

This is just a reminder, as this was true back in 2006 too. You’d typically expect to be able to install the peripheral’s driver and then utilize it within your .NET application. However, you also need to install intermediary Common Control Objects.  I always end up downloading the CCOs from here.  I always forget the proper order and sometimes run into trouble because of this and end up having to uninstall and reinstall half a dozen times until it works (… pleasant…). I believe this is the installation order I use (you may need to reboot between each step).

  1. Install Epson OPOS ADK
  2. Install other drivers (scanners, etc.)
  3. Install the Common Control Objects
  4. Define logical device names (LDN) using Epson OPOS
  5. Install POS for .NET 

 

POS for .NET doesn’t work in 64-bit

Long story short, due to the legacy hardware it supports, POS for .NET only works in 32-bit. If your app runs as a 64-bit process, it will fail with a cryptic error message or will simply be unable to find your peripherals. Example:

System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {CCB90102-B81E-11D2-AB74-0040054C3719} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

You can still use the peripherals on 64-bit operating systems, but you will need to compile your desktop application as 32-bit (Right click on your project –> Build –> Platform target: x86). You even need to do this with the example application that comes with POS for .NET (in C:\Program Files (x86)\Microsoft Point Of Service\SDK\Samples\Sample Application) if you want to use it.
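As a small hedged sketch (this guard is our own suggestion, not something POS for .NET provides), you can check the process bitness at startup on .NET 4.0 and turn the cryptic COM error into an explicit one:

// Hypothetical startup guard: POS for .NET needs a 32-bit process, so fail fast
// with a clear message instead of the REGDB_E_CLASSNOTREG COM exception.
// Environment.Is64BitProcess is available starting with .NET 4.0.
if (Environment.Is64BitProcess)
{
    throw new InvalidOperationException(
        "This application must run as a 32-bit (x86) process to use POS for .NET peripherals.");
}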

You’ll probably run into the same issues with all the .NET test applications supplied by the device manufacturers. Unless you can manage to find an updated sample, you’ll have to work your magic with a decompiler. In addition to probably being illegal, it is a pain and a half. Therefore, you’re better off using the test application that comes with POS for .NET.

As for web applications, you need to force IIS to run your application in a 32-bit application pool (set "Enable 32-Bit Applications" to True on the pool in IIS Manager).

POS for .NET doesn’t work in .NET 4.0

Another bad surprise is migrating your application to .NET 4.0 and then realizing the POS hardware stops working. Long story short, you’ll get this error:

This method explicitly uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see http://go.microsoft.com/fwlink/?LinkID=155570

The error message is fairly self-explanatory. Microsoft stopped supporting “Code Access Security”, which is used internally by POS for .NET. You can either turn on a configuration option that re-enables the legacy CAS model or wait for Microsoft to release a new version of POS for .NET.  We’ve been told not to hold our breath, so the configuration option is the preferred approach.

If you’re creating a desktop application, the solution is in the error message – more details here.  Add this to your app.config:

<configuration>
   <runtime>
      <NetFx40_LegacySecurityPolicy enabled="true"/>
   </runtime>
</configuration>

 

If you’re creating a web application, the flag is a bit different. Add this to your web.config:

<configuration>
    <system.web>
      <trust legacyCasModel="true"/>
   </system.web>
</configuration>

POS for .NET doesn’t work with ASP.NET MVC / dynamic data/operations

The above flag will cause your legacy code to run properly on .NET 4.0 but it does have a side-effect. You will not be able to use some of the newer .NET framework features such as the dynamic keyword. Not only can you not use it explicitly within your own code, but ASP.NET MVC 3 uses it internally within the ViewBag.

Dynamic operations can only be performed in homogenous AppDomain.

Thus, you have to choose between POS for .NET or ASP.NET MVC 3, unless you load up your POS objects in another AppDomain. Here’s some sample code to help you do that.

You need to be able to create another AppDomain and specify that this AppDomain should use the NetFx40_LegacySecurityPolicy option, even if your current AppDomain doesn’t have this flag enabled.

var curr = AppDomain.CurrentDomain.SetupInformation;
var info = new AppDomainSetup()
{
    ApplicationBase = curr.ApplicationBase,
    LoaderOptimization = curr.LoaderOptimization,
    ConfigurationFile = curr.ConfigurationFile,
};
info.SetCompatibilitySwitches(new[] { "NetFx40_LegacySecurityPolicy" });

return AppDomain.CreateDomain("POS Hardware AppDomain", null, info);

 

You can then use this AppDomain to create your POS peripherals. All our peripherals extend our own custom PosHardware base class with a few standard methods such as FindAndOpenDevice(), so we use the following code. For testing purposes, we created a configuration option (IsHardwareLibInSameAppDomain) to toggle between loading in the current AppDomain versus a separate one.

private T Build<T>(string id) where T : PosHardware, new()
{
    T hardware = null;
    if (IsHardwareLibInSameAppDomain)
        hardware = new T();
    else
        hardware = (T)OtherAppDomain.CreateInstanceFromAndUnwrap(Assembly.GetAssembly(typeof(T)).Location, typeof(T).FullName);

    if (!string.IsNullOrEmpty(id))
        hardware.DeviceName = id;
    hardware.FindAndOpenDevice();
    return hardware;
}

 

Also, don’t forget to mark your classes as Serializable and MarshalByRefObject.

[Serializable]
public abstract class PosHardware : MarshalByRefObject

 

Working with objects in other AppDomains is a pain.  Any object that you pass between the two AppDomains (such as parameters to functions or return values) must either be marked as Serializable or extend MarshalByRefObject if you wish to avoid surprises.  If you marshal by value (Serializable only), you will be working on copies of the original object (which may or may not be desirable, depending on your context).
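As a minimal illustration (the two class names below are hypothetical, not part of our hardware library), a MarshalByRefObject is reached through a proxy so there is a single instance, while a [Serializable] type is copied across the boundary:

// Hypothetical types illustrating the two cross-AppDomain marshaling modes.

// Marshaled by reference: calls from the other AppDomain go through a proxy
// back to this one instance (this is how the PosHardware base class behaves).
public class DeviceStatus : MarshalByRefObject
{
    public bool IsOnline { get; set; }
}

// Marshaled by value: the object is serialized and copied, so the receiving
// AppDomain works on its own snapshot of the data.
[Serializable]
public class ScanResult
{
    public string Barcode { get; set; }
    public DateTime ScannedAt { get; set; }
}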

Conclusion

It only took three years without a new release before POS for .NET started being a pain to work with – unless you stick with past technologies. With the advice provided here, however, you should be able to move forward without issue. Did you discover any other gotchas with POS for .NET?
