Web Design Tips

12 Web Design Tutorials For Novices To Expert Developers In 2022
Creating website designs that incorporate modern touches and current trends will help your projects stand out. Keep reading websites like this one and take note of what other designers are doing. Pay attention to the colors, styles, and features used on the websites you visit regularly. There are plenty of courses out there, both in person and online, for learning website design fundamentals. Start with a local community college or online learning hubs such as Udemy or Coursera.
The best way to get creative ideas is to look at the work of other designers. Start browsing sites like Behance, Dribbble, and Pinterest. Websites for brands like Apple, Anthropologie, and Swatch are great for inspiration.
White space gives page elements, and by extension the people viewing the page, room to breathe. Cluttered designs feel uncomfortable, so add white space to relieve that pressure. The JPG file format is ideal for photographic and high-resolution images.
Now that you are familiar with the key components of good web design, go ahead and start creating. When you are only just beginning your website design career, you may find it difficult to jump straight into a project. Just thinking about all the work ahead can be overwhelming, so you end up not knowing how or where to begin. To kick yourself back into gear and be more effective, we recommend exploring the work of other designers. You won't copy their projects, of course, but their creativity is likely to ignite yours. While you browse their portfolios, ask yourself what it is that you like about their work.
PicMonkey is your go-to free online tool if you work with images in any capacity. It doesn't have as many templates and free design elements as Canva, but it's a solid, fast way to do basic image editing and design without having to download any software. It's also worth knowing that color usage in a graphic design affects different audiences differently.
There are several JavaScript features that let you run functions repeatedly, many times a second; the best one for our purposes here is window.requestAnimationFrame(). It takes one parameter: the name of the function you want to run for each frame. The next time the browser is ready to update the screen, your function gets called. If that function draws the new frame of your animation and then calls requestAnimationFrame() again just before the end of the function, the animation loop will continue to run.
First of all, add the following helper function to the bottom of your code. It converts degree values to radians, which is useful because whenever you need to provide an angle value in JavaScript it will nearly always be in radians, whereas humans usually think in degrees. stroke() draws an outline shape by drawing a stroke along the path you have drawn so far. To begin, take a copy of your newly coded canvas template (or make a local copy of 1_canvas_template.html if you did not follow the steps above).
Also, rather than covering only design theory, these courses tend to focus more on practical application. This eight-hour on-demand video course is a complete guide to learning Adobe Photoshop and is great for beginners and intermediate learners alike. And while you're improving your graphic design skills, remember to create a graphic design portfolio website to showcase your work. Beginners can learn graphic design in many different ways, with short and long-term courses available both in person and online.

Graphic Design Logo Concepts
A logo is a mark that leaves an impression in the minds of your customers every time they think of you. We have talented and experienced logo designers who take a thoughtful approach to every logo design project to inspire clients. Often, when we cannot say something with words, we use images and signs to make others understand us. The language of symbols was used in ancient times as well, but here we are talking about one field of the visual arts: graphic design. Using different digital tools on a computer, a designer works to deliver the best possible graphic design services. Unlimited graphic design services cost about $400 per month for typical starting plans.
Combining forward-thinking concepts with a practical design strategy helps clients build their brand with thoughtfulness and authenticity. Clay is a UI/UX, web design, and branding agency based in San Francisco. They create world-class digital products, websites, and brands for startups and Fortune 100 enterprises. We consult, advise, create, design, plan, elaborate, tweak, and measure to make sure we deliver on our promises and present our clients in the best light every time.
Other clients are brands looking for fresh patterns for textiles, wallpaper, and home goods. My designs have appeared on products carried by Nordstrom, Target, Land of Nod, Zappos, ModCloth, and other retailers. Together, let's create a design for your brand that makes you proud.
WebClues Global is a well-known web and mobile app development company. Our staff consists of highly skilled designers and developers in mobile app development, e-commerce web development, UI/UX design, CMS, marketing services, and more. We create online experiences that help companies grow revenue 2-3x within 18 months. We CREATE bold and beautiful brands, then help them ACTIVATE with strategic growth services. As your eCommerce marketing agency, we help brands transform promising ideas into profitable realities. Based on your budget, timeline, and specifications, Clutch can connect you directly with companies that match your project needs.
Siegel+Gale is a global brand strategy, design, and experience agency that sets itself apart from other graphic design companies through its core belief in "simplicity". Imagine is a creative company in NYC that focuses on sustainable packaging design, brand identity creation, and purpose-driven brand positioning. We help brands, large and small, connect emotionally with customers through innovative solutions that showcase the company's purpose and vision. Jelvix is a technology partner that provides custom software development services for companies worldwide. Our ambition is demonstrated by numerous cases of successful digital transformation, unique business engineering, design, and high-quality technology consulting. Maximize is a results-driven web design and digital marketing company that custom-tailors the best strategies to help you achieve your goals.
We improve your customer loyalty and sales by doing research and then using our creativity to shape the way the public views your brand. Launch uses strategy and a marketing plan to ensure a measurable return on investment is achieved. We look at budgets, demographics, and competitors, and work with our clients to create a plan that fits their business and will achieve the goals set.
On the about page, they list a whole range of capabilities beyond pure graphic design. This is the one major design agency where the owners of the business are the creators of the work and serve as the primary contact for every client. Although the CGH design studio is small (only one location, in New York), it was founded way back in 1958 and has since worked for some of the biggest clients in the world. Custom-designed infographics tap the "optic nerve" and are much more impactful than plain text.
Depending on the work that needs to be done, the graphic design studio you hire might ask you to provide different kinds of materials. Creating a stunning visual design that accomplishes a goal is a difficult task that requires plenty of organization and communication. You have to feel comfortable with how a particular design agency will approach your brand, understand your objectives, and translate these into a finished product.
Whether you have a graphic designer on staff who needs help or you are a one-person show handling the creative on your own, Design Pickle can help you get to results faster. If you're getting irrelevant results, try a narrower and more specific search term. Designers know the importance of a great first impression, which is why graphic designer logos are some of the most creative and compelling logos out there. At a glance, the right logo builds trust in your design abilities and sets you apart from the competition. Here are the top services offering unlimited graphic design in 2021. This type of on-demand service is one of the newest graphic design trends and is great for startups, bloggers, agencies, and small businesses.

AWS announces a no-code mobile and web app builder
Amazon Web Services (AWS) has launched Amazon Honeycode, a no-code mobile and web app development tool.
A fully managed service, Amazon Honeycode is a visual application builder that customers can use to create applications backed by an AWS-built database. The company says this allows customers to easily filter, sort, and link data together, helping them create interactive, data-driven applications.
Amazon Honeycode features pre-built templates, where the data model, business logic, and applications, such as time-off reporting and inventory management, are pre-defined and ready-to-use. App developers can also import data into a blank workbook, use the familiar spreadsheet interface to define the data model, and design the application screens with objects like lists, buttons, and input fields.
In addition, Amazon Honeycode provides builders with an option to layer automation on to their applications to drive notifications, reminders, approvals, and other actions. Once the application is built, customers can simply click a button to share it with team members. Customers can build applications with up to 20 users for free, and only pay for the users and storage for larger applications, the company said.
“Customers have told us that the need for custom applications far outstrips the capacity of developers to create them,” said Larry Augustin, vice-president, AWS, in the June 24 announcement.
This imbalance between demand and supply is helping drive the citizen developer, low/no-code movement, Terry Simpson, a technical evangelist at workflow automation firm Nintex, told IT Business in an interview last year.
The applications that users build using Amazon Honeycode can range in complexity from a task-tracking application for a small team to a project management system that manages a complex workflow for multiple teams or departments.
“With low-code moving into the AWS Cloud stack, a new era of lightweight cloud innovation is emerging,” Dion Hinchcliffe, vice-president and principal analyst at Constellation Research, tweeted today.
Channel-based messaging platform Slack and paid image sharing, image hosting service, and online video platform SmugMug are among the first few customers planning to use Amazon Honeycode, the company said.
“We’re excited about the opportunity that Amazon Honeycode creates for teams to build apps to drive and adapt to today’s ever-changing business landscape,” said Brad Armstrong, vice-president of business and corporate development, Slack, in a press release. “We see Amazon Honeycode as a great complement and extension to Slack and are excited about the opportunity to work together to create ways for our joint customers to work more efficiently and to do more with their data than ever before.”
Amazon Honeycode is currently available in beta form in one AWS region – US West (Oregon) – with more regions coming soon, the company said.
By 2024, low-code application development will be responsible for more than 65 per cent of application development activity, according to a recent Magic Quadrant for Enterprise Low-Code Application Platforms report by Gartner.
Other no-code/low-code app development platforms include Google App Maker and Zoho Creator.

4 Ideas For Creating Awesome Blog Names
Your blog's name can have a tremendous impact throughout the entire lifetime of your online business. A blog name needs to be memorable so that people can find it again after they leave your page. It has to be unique so that it's easy to differentiate from your competitors.
Your blog name may also send SEO signals that contribute to how well you rank. It can also tell people who you are and what you offer before visitors have even clicked on the domain link.
The impact of your blog name is virtually endless. It will show up on social media, in ads, and even your brand personality will take shape from the name you choose.
With so many things to consider, it's quite clear that you need to put considerable thought into picking a blog name. It's not as easy as picking something you like; you also need to be strategic about it.
In this post, we’re going to explore several ideas that can inspire you to select the right blog name for your business. We’ll look at ways you can build a blog name based on your solution or audience. And we’ll explore tools that can make the process easier. Let’s get started!
Feature the solution in the name
There are several options for naming your business. Right now, let’s consider building a blog name based on your product or service.
Here’s an example: a cake decorating business changed its name from Paul Bradford Designer Cakes to CakeFlix. You can already see the advantages that a name choice like this can create.
People get a good idea of what the business does from the word 'Cake' in the name. 'Flix' also conveys that it's a video-based business offering online courses or a set of tutorials and videos.
Here’s another example: The Blog Millionaire – a blog name that’s attractive because of the word millionaire, while conveying that it has to do with blogging and earning money.
Featuring your product or solution in the name acts as an instant message to your audience. It tells them what you're all about and can lead to better click-through rates, an important consideration for any business.
Create a unique and unrelated name
Another way to go is to create a blog name that sounds unique and may not have any relationship to your content theme. Here's an example: a site for web development and design called Smashing Magazine. From the name, you might guess that it's a general publication covering popular topics related to news and lifestyle. However, the content is all about designing websites. It still does well enough that it appears on the second page when searching for 'WordPress blog' on Google.
The downside of choosing names like this is that you have to work harder on your keyword and content marketing strategies. And doing this with a more relevant name is already hard enough. Nevertheless, great content brings great recognition at its heels. Think about 'The Onion' and its enormous success even though the name seems virtually random.
Note, however, that the name here is well chosen and a subtle hint at satire. There's enough wordplay to hint at interesting content, which is then backed up with well-written, humorous, and satirical content.
Use your name
Naming your blog after your actual name is a great approach if you want to build a personal brand. It also makes sense if you have a recognizable name in your niche or industry. Very often, business leaders create a blog named after themselves as an authentic source of thought leadership content. One example is the blog of Syed Balkhi, the creator of WPBeginner.
If you’re still building your personal brand, then it’s important to be active in networking events and to create fresh and thought-provoking content on a regular basis. Another strategy to help you grow your blog is to guest post on other publications.
Use tools to help create a blog name
Choosing a blog name is never as simple as picking something you like. You need to make sure that the domain name for it is available. Many short domain names are already taken, and acquiring one can be prohibitively expensive. Online tools will help you quickly learn whether a domain name is available, and they can also suggest interesting names based on your parameters.
Here are some awesome tools to help you create and pick your blog’s name.
Name combiner tools: Another approach to creating blog names is to combine different but relevant words. We already saw an example earlier with CakeFlix. The names 'WordPress' and 'FitBit' are also combinations of words. A free online name-combining tool can help you come up with all the possible word combinations, or you can try a quick script like the sketch after these tool descriptions. You're certain to find something that works for you.
Name generator tools: An easy option is to simply find a blog name generator tool. You won’t just get great blog names, you’ll also learn whether a domain name is available. Some name generators are useful since they have options for hosting and domain plans. If you’re new to blogging or have a smaller budget, it’s a good idea to use these opportunities to set up your blog for a lower price.
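If you want to brainstorm combinations yourself before reaching for an online tool, a few lines of Python can churn out the pairings. The word lists below are invented for illustration, so treat this as a rough sketch rather than a recommended workflow.

```python
# Pair topic words with suffixes to generate candidate blog names.
# Both word lists are made up for illustration; swap in terms from your niche.
from itertools import product

topics = ["cake", "fit", "code"]
suffixes = ["flix", "bit", "hub", "ly"]

for topic, suffix in product(topics, suffixes):
    print(topic.capitalize() + suffix.capitalize())  # CakeFlix, CakeBit, ...
```

You would still need to check domain availability for any candidate you like, which is where the tools above come in.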
Choose the right blog name
The impact of your chosen blog name runs throughout the lifetime of your blogging business. It’s not a choice to be made lightly when your content marketing, advertising, and other activities will grow based on the blog’s name.
In this post, we’ve looked at some neat examples of great blog names. We’ve also checked out two useful tools that will help you get a great name and maybe even set up your blog for a great deal.
You now have helpful ideas for choosing your own blog name, so go ahead and get started on your blogging business.

Club Quarantäne
Clubs are a vital space for connection, pleasure and release—while we can’t physically get together, and dance and sweat in euphoria, we ask you to join us online in the name of solidarity and compassion. Stay home, stay safe and let’s dance together.
Club Quarantäne is an international online club—the website functions as the venue, offering people a place to interact with each other virtually, providing the connection that’s lacking from our daily lives right now, and looking forward to a future when we will dance side by side again.
Club Quarantäne is not organised or funded by RA. You can support the team behind the project on their journey to sustainably hosting more editions, by buying a ticket here. Donations for this event will go towards Sea Watch and a collection of bail funds and racial justice orgs on the website, while there is a fundraiser for the LGBTQ+ focussed charity Center For Black Equity happening on the YouTube stream.
House Rules
Racism, homophobia, transphobia and sexism are not acceptable in any club and will not be allowed as part of this virtual experience. No hate speech, bullying, trolling or hostility. Please respect each other.
Set times (CEST)
22:00 I$A
23:30 Stenny
01:00 Lucy
02:30 KI/KI
04:00 Somewhen
05:30 Peach
07:00 THC
08:30 Analog Soul
10:00 DEBONAIR
11:30 Bergsonist
13:00 Upsammy B2B Oceanic
14:30 Russell E.L. Butler
16:00 Hodge
17:30 Dixon
19:00 rRoxymore
20:30 Nazira
22:00 Freddy K
23:30 Rifts
01:00 Rødhåd
02:30 Hector Oaks
04:00 Aurora Halal
05:30 Paal
07:00 IMOGEN
08:30 Palms Trax
Club Quarantäne is brought to you by
24 incredible DJs.
Invisible Hand (Concept and Project Management)
Sam Aldridge / Multisex (Visual Director & Coordinator)
Maximilian Kreis, Jannis Szeder, Clifford Kent Sage (Visual Artist)
Marco Land, Florian Zia & Vincenz Aubry (Web Design & Development)
Jung & Dynamisch, 200kilo (Graphics)
Dario Damme / Multisex (Creative Assistant)
Countersubject (Mastering)
Abcdinamo (Font)
Toyah Siegel (Fundraising)
Off World Live (Interactive live-stream delivery)
Ungroup (Web Design & Development)
Resident Advisor
YouTube
Typeface
ABC Viafont by Dinamo

Cheapskate’s Journey to On-Demand Load Tests on Heroku With Locust
I want to stretch every dollar that I spend on the cloud. I run a handful of web applications on Heroku, and like everyone else, run a suite of smoke tests and load tests on every release increment in a non-production environment. Load tests are important: they help us not only to understand the limits of our systems but also bring up issues that arise due to concurrency, which often escape the realms of unit tests and integration tests. But since we run the tests often, we don’t want to pay a lot of money every time the tests run.
In this article, I’ll show you how to set up cost-effective load tests. We’ll use Locust to make the testing robust and Heroku to make running the tests easy and cost-effective. I’ll also show how you can use VS Code and Docker for development without installing dev dependencies on your system.
What Is Locust?
Locust is an open-source load testing tool written in Python. Locust tests can be distributed over multiple machines to simulate millions of users simultaneously, helping to determine just how many users your site or system can handle.
Locust was created to address issues that exist with two other leading solutions — JMeter and Tsung. Specifically, it was built to address the following limitations:
- Concurrency: JMeter is thread bound, creating a new thread for every user. This severely limits the number of users that can be simulated per machine. Locust, on the other hand, is event-based and can simulate thousands of users on one process.
- Ease of Coding: JMeter requires complicated callbacks. Tsung uses an XML-based DSL to define user behavior. Both are difficult to code. Locust scenarios, on the other hand, are written in plain Python and are easy to code.
Terminology
First, a little terminology. With Locust, you write user behavior tests in a set of locustfiles and then execute those locustfiles concurrently against the target application. In Locust terms, a collection of simulated users (collectively called a swarm, individually called locusts) attacks the target application and records the results. Each locust runs inside its own sandboxed, lightweight unit of execution called a greenlet.
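To make that concrete, here is a minimal locustfile sketch (illustrative only, not taken from this article's repository): an HttpUser subclass describes one kind of locust, its @task methods define the behavior, and every simulated user runs in its own greenlet.

```python
# Minimal illustrative locustfile (not part of the sample repository).
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits one to three seconds between tasks.
    wait_time = between(1, 3)

    @task
    def index(self):
        # Every locust in the swarm repeatedly issues this request and
        # records the response time and status.
        self.client.get("/")
```

With recent Locust releases, running locust -f locustfile.py starts the web UI on port 8089, where you set the swarm size and the target host.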
Considerations
Before proceeding further, I recommend that you read the guidance from Heroku on load tests, which lists the restrictions that apply and the consequences. The guidance in this article is limited to executing low to medium level tests (less than 10,000 requests per second). For executing high-scale tests, you should either contact Heroku support first to ensure your systems are pre-warmed and will scale appropriately, or use Private Spaces to host your testbed (application under test and the test platform).
For high-volume load tests, I recommend modeling your test setup on this sample application repository. For the latest pricing details and to estimate the cost of running your applications on Heroku, refer to the Heroku website.
Prerequisites
Here is the list of tools and cloud services that I used to build the sample application. My development machine runs Windows 10 Professional; however, the following tools are available on Mac as well.
- VS Code with Remote Development extension.
- A Heroku account in which you can create apps on the standard tier.
- A free Microsoft Azure subscription.
- Docker Desktop for Windows (or Mac).
- Heroku CLI.
- Azcopy.
The Applications
The sample application that I have prepared for this demo, which we will refer to as the Target API application, is a REST API written in Go. We also have a second application, which we will refer to as Loadtest application, that contains the load tests written in Python using Locust.
- The Target API application is the REST API that we intend to test. Since the API is required to process HTTP requests, we host it on web dynos.
- The Loadtest application contains our Locust tests. These are split into two categories based on the type of users supported by the Target API application. You can execute the two test suites in parallel or in sequence, thus varying the amount and nature of load that you apply on the Target API application. Since the dynos executing the tests are required only for the duration of test executions, we host them in Heroku’s one-off dynos. The one-off dynos are billed only for the time and resources that they consume, and an Administrator can spawn them using the Heroku CLI tool.
The following is the high-level design diagram of the applications and their components.
High Level Design Diagram
Heroku provides ephemeral storage to the application processes executing on a dyno, so files written there may not persist. Also, because this storage is local to the process, we cannot access files generated by the Heroku CLI, since the CLI creates another sandboxed process with its own storage on the dyno. Due to these access restrictions, the process that generates the files must export them to a durable cloud storage service or, in the case of web dynos, make them available through an HTTP endpoint. By executing Locust with the --csv flag, you can instruct Locust to persist test results locally in CSV files. We use Azcopy, a CLI tool for copying binary data into and out of Azure storage, to export the results generated by the Locust tests to Azure blob storage.
Setting Up the Applications
The source code of the applications is available in my GitHub repository.
Target API Application
Let’s first dissect the Target API application, which we want to test with our load test suite. Open the folder named api in VS Code. In the file main.go, I have defined three API endpoints:
The behavior of the three endpoints is as follows:
- "/": Returns an HTTP 200 response with text OK.
- "/volatile": Returns an HTTP 200 response but successively delays the response by one second for every 10 requests.
- "/buggy": Returns an HTTP 500 fault message for every fifth request.
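The Target API itself is written in Go, but purely to illustrate the behavior just described, an equivalent service could be sketched in Python with Flask as follows. This is a hypothetical stand-in, not the code from the repository.

```python
# Hypothetical Flask stand-in for the Go Target API, mirroring the three
# endpoint behaviors described above (not the repository's implementation).
import time
from flask import Flask

app = Flask(__name__)
volatile_hits = 0
buggy_hits = 0

@app.route("/")
def index():
    return "OK", 200

@app.route("/volatile")
def volatile():
    # Adds one second of delay for every 10 requests received so far.
    global volatile_hits
    volatile_hits += 1
    time.sleep(volatile_hits // 10)
    return "OK", 200

@app.route("/buggy")
def buggy():
    # Every fifth request fails with an HTTP 500.
    global buggy_hits
    buggy_hits += 1
    if buggy_hits % 5 == 0:
        return "simulated failure", 500
    return "OK", 200

if __name__ == "__main__":
    app.run(port=9000)
```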
Remote Development Extension for Debugging
You probably noticed that I did not mention installing Golang or Python as a prerequisite for this application. We will use the Remote Development extension that you installed to VS Code to debug the Target API application. You can read about this extension in detail here. However, in a nutshell, this extension allows you to use a container as your development environment.
The extension searches for a folder named .devcontainer at the root and uses the Dockerfile (the container definition) and devcontainer.json (for container settings) files to create a new container and mount the folder containing your code as a volume to the container. For debugging, the extension attaches the VS Code debugger to the process running in the container. I have already configured the container resources for you, so you just need to press the F1 key to bring up the command window and select the command: Remote-Containers: Open folder in container.
Open Folder In Container
When asked which folder to open, select the ‘api’ folder and continue.
Alternatively, you can spawn the command dialog by clicking on the green icon in the bottom left of the VS Code window.
Once the container is ready, press F5 to start debugging the application. You will notice that the text in the bottom left corner of the VS Code window changes to Dev Container: Go to denote that the application is currently executing in a remote container. You can now access the application endpoints from your browser by navigating to http://localhost:9000.
Executing Application in a Remote Container
Loadtest Application
Now we are going to use VS Code to build the test suite inside a container and create a shell script that automates the setup and tear down of the test infrastructure. You can add this script to your CI/CD pipeline to spin up and tear down the test grid automatically.
1. Launch Loadtest Application Dev Container
In another VS Code instance, open the folder loadtest and launch it in a dev container as well. In this application, you will notice that I created two sets of tests to model the behavior of two user types of the Target API application.
Locustfiles for Test
- The user behavior of type ApiUser is recorded in locustfile_scene_1.py. According to the test, a user of type APIUser accesses the default and the volatile endpoints of the Target API application after waiting for five to nine seconds between invocations.
- The user behavior of type AdminUser is recorded in locustfile_scene_2.py. This category of user accesses the default and the buggy endpoints of the Target API application after waiting for five to 15 seconds between invocations.
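Put together, the two scenarios described above look roughly like the following sketch. It is condensed into a single listing here; the repository keeps the classes in separate locustfiles, and the exact class and task names are assumptions.

```python
# Sketch of the two user types described above; in the repository each class
# lives in its own locustfile (locustfile_scene_1.py and locustfile_scene_2.py).
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Waits five to nine seconds between invocations.
    wait_time = between(5, 9)

    @task
    def default(self):
        self.client.get("/")

    @task
    def volatile(self):
        self.client.get("/volatile")

class AdminUser(HttpUser):
    # Waits five to 15 seconds between invocations.
    wait_time = between(5, 15)

    @task
    def default(self):
        self.client.get("/")

    @task
    def buggy(self):
        self.client.get("/buggy")
```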
2. Verify the Tests
To verify the test scripts, execute the following command in the integrated terminal (Ctrl + ~).
Navigate to http://localhost:8089 to bring up the Locust UI. In the form, enter the hostname and port of the Target API application along with the desired locust swarm configuration, and click the Start Swarming button to initiate the tests.
Locust UI
3. The Run Shell Script
For executing the locust tests, we need to define a small workflow for each set of tests as follows.
- Execute the test without the web UI on a single worker node for a fixed duration and generate CSV reports of the test results.
- Use Azcopy to copy the test result files to Azure storage. (Of course, you can substitute this part for any cloud storage provider you may use. You would simply need to modify the following script to use a different utility instead of azcopy, and you would be copying to a different storage location.)
The run.sh script in the load test project implements this workflow as follows:
In the previous code listing, after executing the locust command, which produces the CSV results, we loop through the CSV files and use the Azcopy utility to upload each file to an Azure storage location, a container named testresult in the locustloadtest.blob.core.windows.net account. You must replace these values with the storage account that you created in your Azure subscription.
You can see that this command relies on a Shared Access Signature (SAS) token for authentication, which we supply through an environment variable named SAS_TOKEN. We will add this environment variable to the application later. If you are not familiar with the Azcopy utility, you can read more about using Azcopy with SAS tokens here.
Start the Target API Application and Create the Web Dyno
Inside the root directory of each project, API and Loadtest, you will find a file named Procfile.
In the API Procfile, the following command will instruct Heroku to create a web dyno and invoke the command locust-loadtest to launch the application.
web: locust-loadtest
In the Loadtest project, the Procfile for the Locust tests instructs Heroku to create two worker dynos and invoke the run.sh script with the appropriate parameters as follows:
worker_scene_1: bash ./run.sh locustfile_scene_1.py scene_1
worker_scene_2: bash ./run.sh locustfile_scene_2.py scene_2
Creating Applications in Heroku
We will now create the two required applications in Heroku.
There are two ways in which you can interact with Heroku: the user interface and the Heroku CLI. I will guide you through a mix of both approaches so that you get some experience with both.
For creating the applications, we will use the Heroku user interface. We will create the Target API application first.
Create Target API Application
In your browser, navigate to https://dashboard.heroku.com/ and click on the New/Create new app button.
Create a New Heroku App
On the create app page, enter the name of the application (locust-heroku-target), choose the Common Runtime option, and the desired region. Note that the application name must be unique across all Heroku apps, and so this name may not be available. You can choose your own unique name for this application (and the test engine application lower down), making sure to reference these new names in all subsequent code and commands. If your customers are present in multiple geographies, you can create an additional test bed in a different location and test the performance of your application from that location as well. Click the Create app button to create the application.
Create locust-heroku-target
The next screen asks you to specify the deployment method. Since I am already using GitHub for source control, I can instruct Heroku to automatically deploy whenever I make changes to the master branch. I recommend you don’t follow the same scheme for real-life applications. You should deploy to production from the master branch and use another branch such as the release branch to deploy to test environments (Git flow) or from the master branch after approvals (GitHub flow).
Link App to GitHub – locust-heroku-target
Create Loadtest Application
Now let’s set up the Loadtest application for our Locust tests. You can create another app (locust-heroku-testengine) for the test, like this:
Create locust-heroku-testengine
You may have noticed that I used the monorepo model to keep the Target API application and tests together in the same project.
On the next screen, connect the deployment of the application you just created to the same repository. With this setup, whenever you make changes to either the Loadtest or the Target API application, both will be deployed to Heroku, which helps to avoid any conflicts between the versions of the Loadtest and the Target API application.
Link App to GitHub – locust-heroku-testengine
By default, the worker dynos of this application will use Standard-1x dynos, which are a great balance of cost and performance for our scenario. However, you can change the dyno type based on your requirements with the Heroku CLI or through the UI. Refer to the Heroku documentation for the CLI command and types of dynos that you can use.
Adding Buildpacks via Heroku CLI
Now let’s switch to the terminal and prepare the environment using the Heroku CLI. We’ll go through the buildpacks that our services need and add them one at a time.
How the Buildpacks Work
Heroku buildpacks are responsible for transforming your code into a “slug.” In Heroku terms, a slug is a deployable copy of your application. Not every buildpack must generate binaries from your application code—buildpacks can be linked together such that each buildpack transforms the application code in some manner and feeds it to the next buildpack in the chain. However, after processing, the dyno manager must receive a slug as an output.
For example, since our source code is organized as a monorepo consisting of the Target API application and Loadtest application, the first buildpack in the buildpack chain, heroku-buildpack-monorepo, extracts an application from the monorepo. The second buildpack in the chain builds the appropriate application.
Target API Buildpacks
Let us consider the Target API application first. Use heroku-buildpack-monorepo to extract the locust-heroku-target application from the monorepo. The next buildpack, heroku-buildpack-go, builds the Target API project.
Execute the following commands in the exact sequence to preserve their order of execution, and remember to change the name of the application in the command to what you specified in the Heroku User Interface earlier.
Loadtest Buildpacks
For the locust-heroku-testengine project, we need two buildpacks. The first buildpack is the one we used previously, heroku-buildpack-monorepo. We will modify the parameter though, so it will extract the Locust test project (locust-heroku-testengine) from the monorepo. The second buildpack, heroku-buildpack-python, enables executing Python scripts on Heroku.
Configuring Environment Variables
Via Heroku CLI
Our applications require setting a few environment variables.
| Application Name | Variable | Value | Reason |
|---|---|---|---|
| locust-heroku-target | APP_BASE | api | Required by heroku-buildpack-monorepo to extract the project |
| locust-heroku-testengine | APP_BASE | loadtest | Required by heroku-buildpack-monorepo to extract the project |
| locust-heroku-testengine | PATH | /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/app/bin | Adds the bin folder of the loadtest application to PATH so that Azcopy can be executed |
| locust-heroku-testengine | SAS_TOKEN | Azure storage SAS token, e.g. ?sv=2019-10-10&ss=bfqt&srt=sco&sp=rwdlacupx&se=2025… | Required by Azcopy to transfer data to Azure storage |
| locust-heroku-testengine | TARGET_HOST | URL of the Target API application | Required by Locust to execute load tests |
Execute the following commands to add the environment variables to your applications.
Via Heroku User Interface
As I mentioned above in this article, you can configure the applications through the user interface as well. You can find the settings that we applied under the Settings tab as shown in the following screenshot of the section from the locust-heroku-target application.
Settings – locust-heroku-target
Similarly, the following screenshot illustrates the settings that we applied to the locust-heroku-testengine application.
Settings – locust-heroku-testengine
Deploy the Applications
Because of the existing GitHub integration, Heroku deploys our application whenever any changes are pushed to the master branch. Push your application or changes to GitHub and wait for the build to complete. You can view the logs of the build under the Activity tab of your application.
Target API application
After deployment, you can navigate to the Resources tab and view the dyno hosting the application. You can scale out the dyno from this UI. Click on the Open app button to launch the application.
Open App – Locust Heroku Target
Loadtest application
If you navigate to the locust-heroku-testengine app, you will find that Heroku created two worker dynos by reading the instructions from the Loadtest project’s Procfile.
Worker Dynos of locust-heroku-testengine
Execute Tests
To execute the tests hosted in the dynos, we kick them off using the Heroku CLI with the following commands. These start the one-off dynos, which then terminate right after they finish execution.
After execution, the Azcopy utility copies the CSV files containing the test results to Azure storage, which you can extract using Azure Storage Explorer. The following image illustrates this process in action.
Execute Load Tests
You can use a custom visualizer or open the CSV files in Excel to read the test results. The following image presents part of a result that I received from the execution of worker_scene_2 dyno that executes the test present in the locustfile_scene_2.py file.
Load Test Results
The Results
Let’s analyze the results to see how well our application is working. Every test run produces three files:
- The failures.csv file lists the total number of failures encountered. In the scenario 2 results, my run produced 28 errors from the GET /buggy endpoint, which were expected because that is how we programmed it.
- The stats.csv file lists the endpoints to which the tests send requests and the response times in milliseconds. My run for scenario 2 shows that the swarm sent 29 and 28 requests to the GET / and GET /buggy endpoints respectively. On average, the locusts received a response from the two endpoints in 149 ms and 78 ms respectively. The percentile splits of response time are the most valuable pieces of information generated by the load tests. From my test run I can see that 99% of the users of my API will receive a response from the GET / and GET /buggy endpoints within 430 ms and 270 ms respectively.
- The third file, history.csv, is similar to the stats.csv file but gets a new row for every 10 seconds of the test run. By inspecting this file, you can find out whether your API response time deteriorates as time passes.
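If you would rather inspect the exported CSVs programmatically than open them in Excel, a few lines of pandas are enough. Note that pandas is not part of the project, and the file names below are examples matching the scene prefix used in this article.

```python
# Load the three result files exported by a scenario run for a quick look.
# pandas is an optional extra here; adjust the file names to your --csv prefix.
import pandas as pd

stats = pd.read_csv("scene_2_stats.csv")        # per-endpoint counts, averages, percentiles
failures = pd.read_csv("scene_2_failures.csv")  # error counts per endpoint
history = pd.read_csv("scene_2_history.csv")    # one row per ~10 seconds of the run

print(stats.head())
print(failures.head())
print(history.tail())
```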
Let's also look at how much it costs to execute these tests. I hosted the tests on two Standard-1X dynos, which cost $25 each per month. Therefore, if I were to let the tests execute continuously for a month, it would cost $50. Since my individual test runs lasted only two minutes, and Heroku charges for processing time by the second, the charges I incurred were so minuscule that they did not even show up on my dashboard.
That's great, but let's approximate the charges that testing a real-life application might incur. Let's say an API requires around 10 suites of tests on average, and hence 10 dynos. If these tests run every night and each run lasts five minutes, each dyno will remain active for 300 seconds x 30 days = 9,000 seconds per month; hence, each dyno will cost approximately $0.086 per month. The total cost of running 10 load-test dynos (one-off dynos) for an entire month will be around $0.87.
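As a quick sanity check on that arithmetic, here is the same estimate worked out in a few lines, using the article's figures and Heroku's per-second proration:

```python
# Rough cost estimate for nightly five-minute runs on Standard-1X one-off dynos.
PRICE_PER_MONTH = 25.0                       # Standard-1X dyno, USD per month
SECONDS_PER_MONTH = 30 * 24 * 3600           # ~2.59 million seconds

seconds_used = 5 * 60 * 30                   # five minutes per night for 30 nights = 9,000 s
cost_per_dyno = PRICE_PER_MONTH / SECONDS_PER_MONTH * seconds_used

print(f"per dyno:  ${cost_per_dyno:.3f}")       # prints roughly 0.087
print(f"ten dynos: ${10 * cost_per_dyno:.2f}")  # prints roughly 0.87
```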
Conclusion
You are now ready to execute load tests on Heroku using Locust. You’ll be able to test the stability and performance of every deployment. Since one-off dynos are charged only for the time and resources that they consume, you’ll get maximum value from every cent that you spend.

Testing and Staging Environments in eCommerce Implementation
ECommerce implementation is not easy. It requires careful planning and execution on the developer's part. One key way to get it right is to ensure proper testing.
In this article, we will look at the testing and staging environments in eCommerce implementation.
Is SDLC enough? Is Testing Required?
Before we do, let's quickly review the software development lifecycle (SDLC), which is the cornerstone of any development effort. The SDLC involves multiple stages, including planning, design, analysis, maintenance, deployment, and of course testing.
However, these days it is hard to follow the strict, linear flow of the SDLC, which is where agile development comes in. In any development methodology, testing is crucial: QA engineers and testers manage it to ensure that the final website or product is as polished as possible.
The Role of the Minimum Viable Product (MVP)
To keep the core idea of excellence on track, your development team needs to go to the customer, learn from them about the product, and take feedback constructively. By doing so, they will be able to build a minimum viable product (MVP) that sets a proper development trajectory, iron out any communication issues, and keep the feedback loop of high quality.
Another thing to ensure is that testing is done properly before code is pushed from the development environment to production.
MVPs should be tested as thoroughly as possible. This includes testing on different operating systems, including macOS, Windows, and Linux. You can go a step further by trying out your eCommerce site over a VPN, such as a VPN for MacBook, since VPNs are essential tools for data security.
As you can see, testing is required to make sure that there are no defects in either the design or the system. Testing is the process of identifying issues and solving them.
In fact, testing should also cover web hosting, evaluating reliability, load handling, and so on. If the development team is using a website builder, they also need to make sure that no bugs are left behind by the builder. In short, proper end-to-end testing needs to be done to ensure no lapse in quality when the site goes online.
Even if you are not creating an eCommerce site, you should still take care of testing before officially releasing your site.
Understanding the Difference: Development Environment, Testing Environment, and Staging Environment
Before we move forward, we need to get a better understanding of the different development environments. Let’s get started.
Development environment: Developers use the development environment to develop things! They configure the environment so that they can write code and test it before making it live. Generally, the development environment is smaller and does not exactly match the real-world scenario. It also comes with tools that are developer-specific and have gone through rigorous QA validation.
One more thing that makes it unique is that it is constantly evolving with new functionality. This might work for a development environment, but it makes the work of QA engineers and testers harder. That is where the testing environment comes in, which we discuss below.
Testing environment: The testing environment is created specifically for testing purposes. This is where QA engineers and testers run their testing tools. The tests are pre-defined or automated and run over application code taken directly from the development environment. Developers write small tests while writing the code, but the testing environment is where the majority of the testing is done. The tests cover different criteria, exercising the app across multiple environments, use cases, and so on.
Staging environment: So where does the staging environment come in? It is the environment where user-acceptance testing takes place. Here, an exact replica of the main site or app is created and changes are made according to the requirements. So, if your eCommerce site requires some changes, they are pushed to the staging environment, made and verified there, and then finally the app or site is pushed to users. This is the best place to test code quality, because the testing reflects how users will actually interact with the site.
Importance of Staging Environment
There is no doubt that staging environments are important for any project. But for eCommerce projects they matter even more, because an eCommerce site has to care about the user experience, which directly affects sales. Teams also need a staging environment to incorporate small changes to the site without going back to the development environment.
Most hosting providers offer a way to create and manage a staging environment for your development team to work in, creating a seamless way to manage changes and improve the user experience.
To set up the staging site, you can contact your hosting provider, or you can set it up yourself by installing it just like your main site. Moreover, most plugins and services will work on the staging site without requiring a new license. One more thing to know is that staging sites are more about functionality than content.
If you go the route of installing it yourself, make sure that you replicate your site fully, including a complete database clone of your live site. Once you have made your changes, simply transfer the site to the live server, and you are done.
Conclusion
That brings us to the end of our look at testing and staging environments in eCommerce implementation. There is no doubt that an eCommerce site needs testing. Staging sites also help, because they allow you to focus on user-centric testing and make it easy to apply small changes to the site.
So, will you focus on creating a testing and staging environment for your eCommerce site? If so, tell us in the comments how you manage it, so that other readers can learn from it.
Author: Spyre Studios

BigCommerce Provides Dedicated Technical Account Manager to Americaneagle.com
Americaneagle.com is one of just a few partners with this advantage, giving customers exclusive access to dedicated BigCommerce support
CHICAGO, June 24, 2020 /PRNewswire/ — Americaneagle.com, a full-service, global digital agency, has just strengthened its strategic alliance with one of its top partners. BigCommerce, a leading SaaS-based ecommerce platform, is now providing additional support to Americaneagle.com through a dedicated Technical Account Manager (TAM). With this TAM and Americaneagle.com's Elite Partner status, agency clients on BigCommerce can get the support they need for complex builds, implementations, upgrades, and troubleshooting, all from one place.
Acting as an advocate for Americaneagle.com clients, the BigCommerce Technical Account Manager will provide guidance and operational management. From the early sales process to production and post-launch, the TAM will assist the Americaneagle.com team with platform configurations, escalate and prioritize cases, give key recommendations for implementations, and provide expert advice on upcoming releases and enhancements. All of these benefits give clients further peace of mind and a greater return on their BigCommerce investment.
Jon Elslager, Americaneagle.com's BigCommerce Practice Manager, said: “For our customers at Americaneagle.com, having access to a Technical Account Manager gives us the needed assistance within BigCommerce to escalate changes and to keep a pulse on all things new to the platform. Moreover, it helps us accelerate the rate at which we deliver projects and allows us to more easily stay on time and on budget.”
Americaneagle.com has been a BigCommerce partner for over five years, launching several large-scale implementations for clients like Berlin Packaging, Carson Dellosa, and Ohio State University. As an Elite Partner, the team has been at the forefront of the platform's enhancements, along with developing several connectors and tools within the BigCommerce marketplace. The TAM will amplify all of these efforts and strengthen the agency's tight-knit partnership with BigCommerce.
About Americaneagle.com
Americaneagle.com is a full-service, global digital agency based in Des Plaines, Illinois that provides best-in-class web design, development, hosting, post-launch support and digital marketing services. Currently, Americaneagle.com employs 500+ professionals in offices around the world including Chicago, Cleveland, Dallas, London, Los Angeles, New York, Nashville, Washington DC, Switzerland, and Bulgaria. Some of their 2,000+ clients include Berlin Packaging, Delasco, The Ohio State University, Stuart Weitzman, WeatherTech, and Monticello. For additional information, visit www.americaneagle.com
Contact
Michael Svanascini, President
[email protected]
847-699-0300
View original content to download multimedia: http://www.prnewswire.com/news-releases/bigcommerce-provides-dedicated-technical-account-manager-to-americaneaglecom-301082671.html
SOURCE Americaneagle.com

New workshop: There’s a typo on the homepage! Website redesign strategy
Marketing websites pose an almost universal challenge at companies of all sizes. Who manages it? Who updates it? Who designs it? How often do you update it? What platform should it be built on? How do you know if it’s working? These are just a few of the questions you’ve likely heard or asked yourself.
We’re hosting a free 1-hour workshop June 30th to answer all your questions
A common scenario features a dejected marketing team that can't even independently fix a copy typo, a development team unhappily getting pulled off product work, and founders frustrated by the lack of progress on their pride and joy, the company's virtual window display: the marketing site.
A marketing team making independent updates to the website sounds like a simple need, but what are we really talking about? Brand strength, content strategy, visitor experience, lead conversion, sales enablement. We’re talking about all things marketing that contribute to achieving your business goals.
Web design and development companies like thoughtbot are not immune to these challenges and usually face one of two website situations: 1) the site is super simple, just a few pages, and never really gets updated; or 2) many different designers and developers build on the site independently, without consistency or a centralized strategy.
I've faced these scenarios as a marketing leader at companies large and small, including with our very own thoughtbot.com. But last year we went through a redesign process in which I learned things that will forever change my approach to websites and design. With some phenomenal colleagues, including our Chief Design Officer, we took a design-led, user-first approach to re-envisioning what thoughtbot.com is and what its goals are.
In the process, we developed brand guidelines, voice and tone docs, and a design system; transitioned to the Prismic CMS; developed a content workflow built on the Jobs-to-be-Done framework; and refocused the site around clear business objectives.
The results are positive and we’re eager to share our learnings and processes with you so you can benefit from our work.
We’ll be walking through it all during a 1-hour workshop on Tuesday June 30th at 12pm ET.
I’m dedicating this one to anyone who’s ever received an URGENT text, email, or DM that reads “There’s a typo on the homepage!”

AWS Announces Amazon Honeycode
SEATTLE–(BUSINESS WIRE)–Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ:AMZN), today announced Amazon Honeycode, a fully managed service that allows customers to quickly build powerful mobile and web applications – with no programming required. Customers who need applications to track and manage things like process approvals, event scheduling, customer relationship management, user surveys, to-do lists, and content and inventory tracking no longer need to do so by error-prone methods like emailing spreadsheets or documents, or hiring and waiting for developers to build costly custom applications. With Amazon Honeycode, customers can use a simple visual application builder to create highly interactive web and mobile applications backed by a powerful AWS-built database to perform tasks like tracking data over time and notifying users of changes, routing approvals, and facilitating interactive business processes. Using Amazon Honeycode, customers can create applications that range in complexity from a task-tracking application for a small team to a project management system that manages a complex workflow for multiple teams or departments. Customers can get started creating applications in minutes, build applications with up to 20 users for free, and only pay for the users and storage for larger applications. To get started with Amazon Honeycode, visit http://honeycode.aws.
Today’s customers have a growing need to track data over time, manage workflows involving multiple people, and facilitate complex business processes. For example, customers regularly perform important business functions like managing field agents, performing PO approvals, scheduling weekly events, reporting employee or team activities, tracking task progress, following customer activity, surveying end users, managing content, inventorying resources, and many more of these activities. Many teams try to use simple spreadsheets as a Band-Aid to manage these tasks, but spreadsheets lack true database-like capabilities to sort and filter data, make collaboration with others hard to do, and are difficult to use on mobile devices. Customers try to solve for the static nature of spreadsheets by emailing them back and forth, but all of the emailing just compounds the inefficiency because email is slow, doesn’t scale, and introduces versioning and data syncing errors. As a result, people often prefer having custom applications built, but the demand for custom programming often outstrips developer capacity, creating a situation where teams either need to wait for developers to free up or have to hire expensive consultants to build applications. What usually happens instead is that these applications just never get built. The chasm between using spreadsheets and building custom applications creates a situation where customers often experience unnecessary inefficiency, waste, and inaction.
What customers want is the ability to create applications using the simplicity and familiarity of a spreadsheet, but with the data management capability of a database, the collaboration and notifications common in business applications, and a truly seamless web and mobile user experience. That’s what Amazon Honeycode delivers. Amazon Honeycode relies on the familiar interface of a spreadsheet, but under the hood, offers the power of an AWS-developed database, so customers can easily sort, filter, and link data together to create data-driven, interactive applications. Users can easily create dynamic views and dashboards that are updated in real-time as the underlying data changes – something that is hard to do even with powerful relational databases. Applications built using Amazon Honeycode leverage the full power and scale of AWS, and can easily scale up to 100,000 rows in each workbook, without users having to worry about building, managing, and maintaining the underlying hardware and software. Amazon Honeycode does all of this under the covers by automating the process of building and linking the three tiers of functionality found in most business applications (database, business logic, and user interface), and then deploying fully interactive web and mobile applications to end users so customers can focus on creating great applications without having to worry about writing code or scaling infrastructure.
In Amazon Honeycode, customers can get started by selecting a pre-built template, where the data model, business logic, and applications are pre-defined and ready-to-use (e.g. PO approvals, time-off reporting, inventory management, etc.). Or, they can import data into a blank workbook, use the familiar spreadsheet interface to define the data model, and design the application screens with objects like lists, buttons, and input fields. Builders can also add automations to their applications to drive notifications, reminders, approvals, and other actions based on conditions. Once the application is built, customers simply click a button to share it with team members. With Amazon Honeycode, customers can quickly and easily build multi-user, scalable, and collaborative web and mobile applications that allow them to act on the data that would otherwise be locked away in static spreadsheets.
“Customers have told us that the need for custom applications far outstrips the capacity of developers to create them,” said Larry Augustin, Vice President, Amazon Web Services, Inc. “Now with Amazon Honeycode, almost anyone can create powerful custom mobile and web applications without the need to write code.”
Amazon Honeycode is available today in US West (Oregon) with more regions coming soon.
Slack is the leading channel-based messaging platform. “We’re excited about the opportunity that Amazon Honeycode creates for teams to build apps to drive and adapt to today’s ever-changing business landscape,” said Brad Armstrong, VP of Business and Corporate Development, Slack. “We see Amazon Honeycode as a great complement and extension to Slack and are excited about the opportunity to work together to create ways for our joint customers to work more efficiently and to do more with their data than ever before.”
SmugMug is a paid image sharing, image hosting service, and online video platform on which users can upload photos and videos. “We are excited to see the opportunity that Amazon Honeycode creates for our teams to build applications that help them respond to changing business conditions,” said Don MacAskill, CEO & Chief Geek, SmugMug & Flickr. “Based upon how easy it is to create new applications, it should really help our teams, and we can see it really taking off.”
About Amazon Web Services
For 14 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 175 fully featured services for compute, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 76 Availability Zones (AZs) within 24 geographic regions, with announced plans for nine more Availability Zones and three more AWS Regions in Indonesia, Japan, and Spain. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.
About Amazon
Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.

WordPress costs: custom website vs e-commerce
Transcript
Codeable — WordPress pricing debunked: how much does a WordPress website or an e-commerce site cost?
WordPress market share (April 2016): WordPress is used by more than 26.4% of the top 10 million websites and by more than 60 million websites overall, making it the most popular CMS on the web. WordPress is open source and free to download at https://wordpress.org/download/.
Cost breakdown for a custom WordPress business website (6 pages plus a blog page), with each figure covering roughly one to three weeks of full-time work:
- Planning stage (4 people: designer, copywriter, UX designer, developer): $9,600
- Copywriting (1 person): $4,800
- Design (1 person): $4,800
- Theme development (1 person): $7,200
- Grand total: $26,400
Bought an off-the-shelf theme instead (ThemeForest, Mojo Themes, TemplateMonster, etc.)? You would still need:
- Design materials such as logo, photos, and color palettes (1 person, roughly 1 week of full-time work): $2,400
- Copywriting (1 person, roughly 2 weeks of full-time work): $4,800
- Theme implementation (1 person, roughly 1 week of full-time work): $2,400
- Grand total: $9,600
Did you know? WordPress themes sold on marketplaces for $30–$100 can cost up to $50K to develop, which is why their authors need to sell thousands of copies at low prices.
What if you need an online store? WooCommerce market share (May 2016): WooCommerce is adopted by 37% of e-commerce sites across the entire web and by 19% of those among the top million websites. WooCommerce is an open-source e-commerce plugin for WordPress, free to download at https://www.woothemes.com/woocommerce/.
Building a WooCommerce shop: there are 29 available themes, both free and premium starting at $39, and 369 available extensions, priced from $0 up to $249. Elements that pile up in the final cost of an e-commerce site include reliable and scalable hosting, third-party fees (like PayPal), UX planning, visual design, copywriting, development and documentation, an SSL certificate, imagery, a domain, and testing.
Can you do it all by yourself?
- Yes: roughly $1,000 for a good hosting company, a paid SSL certificate, a premium theme, and some paid extensions.
- No: a typical budget for a small-to-medium custom WooCommerce project is between $5,000 and $20,000.
Don't forget to account for maintenance costs: as a rule of thumb, budget between 3% and 10% of the revenue generated by the e-commerce site. All costs above are based on a $60 hourly rate for each person involved, which is the average rate charged at Codeable.
Xi-Editor Retrospective
A bit more than four years ago I started the xi-editor project. Now I have placed it on the back burner (though there is still some activity from the open source community).
The original goal was to deliver a very high quality editing experience. To this end, the project spent a rather large number of “novelty points”:
- Rust as the implementation language for the core.
- A rope data structure for text storage.
- A multiprocess architecture, with front-end and plug-ins each with their own process.
- Fully embracing async design.
- CRDT as a mechanism for concurrent modification.
I still believe it would be possible to build a high quality editor based on the original design. But I also believe that this would be quite a complex system, and require significantly more work than necessary.
I’ve already written the CRDT part of this retrospective, as a comment in response to a GitHub issue. That prompted a good discussion on Hacker News. In this post, I will touch on CRDT again, but will focus on the other aspects of the system design.
Origins
The original motivation for xi came from working on the Android text stack, and confronting two problems in particular. One, text editing would become very slow as the text buffer got bigger. Two, there were a number of concurrency bugs in the interface between the EditText widget and the keyboard (input method editor).
The culprit of the first problem turned out to be the SpanWatcher interface, combined with the fact that modern keyboards like to put a spelling-correction span on each word. When you insert a character, all the successive spans bump their locations up by one, and then you have to send onSpanChanged for each of those spans to all the watchers. The spans data structure also had a naive O(n) implementation, so the whole thing was quadratic or worse.
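To make the cost concrete, here is a minimal Rust sketch (hypothetical, not the actual Android Java code) of a naive span store that shifts every later span on each insert:

```rust
// Hypothetical sketch, not Android's actual implementation: a naive span store
// that keeps absolute offsets and shifts every affected span on each insert.
struct Span {
    start: usize,
    end: usize,
}

struct NaiveSpans {
    spans: Vec<Span>,
}

impl NaiveSpans {
    /// Inserting one character at `pos` touches every span at or after it:
    /// O(n) work per keystroke, so typing n characters into a document with
    /// n spans is O(n^2) overall, before even counting the per-span
    /// onSpanChanged notifications sent to every watcher.
    fn insert_char(&mut self, pos: usize) {
        for span in &mut self.spans {
            if span.start >= pos {
                span.start += 1;
                span.end += 1;
            } else if span.end >= pos {
                span.end += 1;
            }
        }
    }
}

fn main() {
    let mut store = NaiveSpans {
        spans: (0..5).map(|i| Span { start: i * 10, end: i * 10 + 4 }).collect(),
    };
    store.insert_char(12); // every span at or beyond offset 12 must be updated
    assert_eq!(store.spans[2].start, 21);
}
```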
The concurrency bugs boiled down to synchronizing edits across two different processes, because the keyboard runs in a different process than the application hosting the EditText widget. Thus, when you send an update (to move the cursor, for example) while the text on the other side is changing concurrently, it’s ambiguous whether the update refers to the old or the new location. This was handled in an “almost correct” style, with timeouts on housekeeping updates to minimize the chance of a race. One visible manifestation was that swiping the cursor slowly through text containing complex emoji could cause the emoji to flash as they briefly broke apart.
These problems have a unifying thread: in both cases there are small diffs to the text, but then the data structures and protocols handled these diffs in a less than optimal way, leading to both performance and correctness bugs.
To a large extent, xi started as an exploration into the “right way” to handle text editing operations. In the case of the concurrency bugs, I was hoping to find a general, powerful technique to facilitate concurrent text editing in a distributed-ish system. While most of the Operational Transformation literature is focused on multiple users collaboratively editing a document, I was hoping that other text manipulations (like an application enforcing credit card formatting on a text input field) could fit into the general framework.
That was also the time I was starting to get heavily into Rust, so it made natural sense to start prototyping a new green-field text editing engine. How would you “solve text” if you were free of backwards compatibility constraints (a huge problem in Android)?
When I started, I knew that Operational Transformation was a solution for collaborative editing, but it had a reputation for being complex and finicky. I had no idea how deep the OT, and then CRDT, rabbit hole would go. Much of that story is told in the CRDT discussion previously linked.
The lure of modular software
There is an extremely long history of people trying to build software as composable modules connected by some kind of inter-module communication fabric. Historical examples include DCE/RPC, Corba, Bonobo, and more recently things like Sandstorm and Fuchsia Modular. There are some partial successes, including Binder on Android, but this is still mostly an unrealized vision. (Regarding Binder, it evolved from a much more idealistic vision, and I strongly recommend reading this 2006 interview about OpenBinder).
When I started xi, there were signs we were getting there. Microservices were becoming popular in the Internet world, and of course all Web apps have a client/server boundary. Within Google, gRPC was working fairly well, as was the internal process separation within Chrome. In Unix land, there’s a long history of the terminal itself presenting a GUI (if primitive, though gaining features such as color and mouse). There’s also the tradition of Blit and then, of course, NeWS and X11.
I think one of the strongest positive models was the database / business logic split, which is arguably the most successful example of process separation. In this model, the database is responsible for performance and integrity, and the business logic is in a separate process, so it can safely do things like crash and hang. I very much thought of xi-core as a database-like engine, capable of handling concurrent text modification much like a database handles transactions.
Building software in such a modular way requires two things: first, infrastructure to support remote procedure calls (including serialization of the requests and data), and second, well-defined interfaces. Towards the end of 2017, I saw the goal of xi-editor as primarily being about defining the interfaces needed for large scale text editing, and that this work could endure over a long period of time even as details of the implementation changed.
For the infrastructure, we chose JSON (about which more below) and hand-rolled our own xi-rpc layer (based on JSON-RPC). It turns out there are a lot of details to get right, including dealing with error conditions, negotiating when two ends of the protocol aren’t exactly on the same version, etc.
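As a rough sketch of what a single message on such a pipe can look like, here is a JSON-RPC-style request serialized with serde on the Rust side (the method name and fields are hypothetical, not xi-rpc's actual protocol):

```rust
// Hypothetical JSON-RPC 2.0-style request; the method and params are made up
// for illustration and are not xi-rpc's real message schema.
use serde::{Deserialize, Serialize};
use serde_json::json;

#[derive(Serialize, Deserialize, Debug)]
struct RpcRequest {
    jsonrpc: String,
    id: u64,
    method: String,
    params: serde_json::Value,
}

fn main() -> Result<(), serde_json::Error> {
    let req = RpcRequest {
        jsonrpc: "2.0".to_string(),
        id: 1,
        method: "edit/insert".to_string(), // hypothetical method name
        params: json!({ "chars": "hello", "rev": 42 }),
    };
    // One line of JSON per message over the pipe between front-end and core.
    let wire = serde_json::to_string(&req)?;
    println!("{}", wire);
    // The receiving side deserializes and dispatches on `method`.
    let parsed: RpcRequest = serde_json::from_str(&wire)?;
    println!("{:?}", parsed);
    Ok(())
}
```

Error handling and version negotiation are exactly the parts this sketch leaves out, and they are where most of the real work went.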
One of the bolder design decisions in xi was to have a process separation between front-end and core. This was inspired in part by Neovim, in which everything is a plugin, even GUI. But the main motivation was to build GUI applications using Rust, even though at the time Rust was nowhere near capable of native GUI. The idea is that you use the best GUI technology of the platform, and communicate via async pipes.
One argument for process separation is to improve overall system reliability. For example, Chrome has a process per tab, and if the process crashes, all you get is an “Aw, snap” without bringing the whole browser down. I think it’s worth asking the question: is it useful to have the front-end continue after the core crashes, or the other way around? I think probably not; in the latter case it might be able to safely save the file, but you can also do that by frequently checkpointing.
Looking back, I see much of the promise of modular software as addressing goals related to project management, not technical excellence. Ideally, once you’ve defined an inter-module architecture, then smaller teams can be responsible for their own module, and the cost of coordination goes down. I think this type of project management structure is especially appealing to large companies, who otherwise find it difficult to manage larger projects. And the tax of greater overall complexity is often manageable, as these big companies tend to have more resources.
JSON
The choice of JSON was controversial from the start. It did end up being a source of friction, but for surprising reasons.
The original vision was to write plug-ins in any language, especially for things like language servers that would be best developed in the language of that ecosystem. This is the main reason I chose JSON, because I expected there would be high quality implementations in every viable language.
Many people complained about the fact that JSON escapes strings, and suggested alternatives such as MessagePack. But I knew that the speed of raw JSON parsing was a solved problem, with a number of extremely high performance implementations (simdjson is a good example).
Even so, aside from the general problems of modular software as described above, JSON was the source of two additional problems. For one, JSON in Swift is shockingly slow. There are discussions on improving it but it’s still a problem. This is surprising to me considering how important it is in many workloads, and the fact that it’s clearly possible to write a high performance JSON implementation.
Second, on the Rust side, while serde is quite fast and very convenient (thanks to proc macros), when serializing a large number of complex structures, it bloats code size considerably. The xi core is 9.3 megabytes in a Linux release build (debug is an eye-watering 88MB), and a great deal of that bloat is serialization. There is work to reduce this, including miniserde and nanoserde, but serde is still by far the most mainstream.
I believe it’s possible to do performant, clean JSON across most languages, but people should know, we’re not there yet.
The rope
There are only a few data structures suitable for representation of text in a text editor. I would enumerate them as: contiguous string, gapped buffer, array of lines, piece table, and rope. I would consider the first unsuitable for the goals of xi-editor as it doesn’t scale well to large documents, though its simplicity is appealing, and memcpy is fast these days; if you know your document is always under a megabyte or so, it’s probably the best choice.
Array of lines has performance failure modes, most notably very long lines. Similarly, many good editors have been written using piece tables, but I’m not a huge fan; performance is very good when first opening the file, but degrades over time.
My favorite aspect of the rope as a data structure is its excellent worst-case performance. Basically, there aren’t any cases where it performs badly. And even the concern about excess copying because of its immutability might not be a real problem; Rust has a copy-on-write mechanism where you can mutate in-place when there’s only one reference to the data.
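One way to get that copy-on-write behavior with standard Rust is Arc::make_mut, which clones only when the data is actually shared; a minimal sketch:

```rust
use std::sync::Arc;

fn main() {
    // Two owners of the same immutable buffer, e.g. a snapshot shared with
    // another thread or another revision of the document.
    let a: Arc<String> = Arc::new(String::from("hello"));
    let mut b = Arc::clone(&a);

    // make_mut clones the String here only because another reference exists;
    // if `b` were the sole owner, it would mutate in place with no copy.
    Arc::make_mut(&mut b).push_str(", world");

    assert_eq!(*a, "hello");
    assert_eq!(*b, "hello, world");
}
```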
The main argument against the rope is its complexity. I think this varies a lot by language; in C a gapped buffer might be preferable, but I think in Rust, a rope is the sweet spot. A large part of the reason is that in C, low level implementation details tend to leak through; you’ll often be dealing with a pointer to the buffer. For the common case of operations that don’t need to span the gap, you can hand out a pointer to a contiguous slice, and things just don’t get any simpler than that. Conversely, if any of the invariants of the rope are violated, the whole system will just fall apart.
In Rust, though, things are different. Proper Rust style is for all access to the data structure to be mediated by a well-defined interface. Then the details about how that’s implemented are hidden from the user. A good way to think about this is that the implementation has complexity, but that complexity is contained. It doesn’t leak out.
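As a toy illustration of that containment (far simpler than xi-rope, and not its actual API), a rope can expose just length and indexing while the tree structure stays private:

```rust
// Toy rope for illustration only; xi-rope's real interface is richer and its
// tree is balanced, chunked, and metric-aware.
use std::sync::Arc;

enum Rope {
    Leaf(String),
    Node {
        left: Arc<Rope>,
        right: Arc<Rope>,
        left_len: usize, // chars in the left subtree, cached for fast lookup
    },
}

impl Rope {
    fn len(&self) -> usize {
        match self {
            Rope::Leaf(s) => s.chars().count(),
            Rope::Node { right, left_len, .. } => *left_len + right.len(),
        }
    }

    fn char_at(&self, i: usize) -> Option<char> {
        match self {
            Rope::Leaf(s) => s.chars().nth(i),
            Rope::Node { left, right, left_len } => {
                if i < *left_len {
                    left.char_at(i)
                } else {
                    right.char_at(i - *left_len)
                }
            }
        }
    }
}

fn main() {
    let rope = Rope::Node {
        left: Arc::new(Rope::Leaf("hello, ".to_string())),
        right: Arc::new(Rope::Leaf("world".to_string())),
        left_len: 7,
    };
    assert_eq!(rope.len(), 12);
    assert_eq!(rope.char_at(7), Some('w'));
}
```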
I think the rope in xi-editor meets that ideal. A lot of work went into getting it right, but now it works. Certain things, like navigating by line and counting UTF-16 code units, are easy and efficient. It’s built in layers, so could be used for other things including binary editing.
One of the best things about the rope is that it can readily and safely be shared across threads. Ironically, we didn’t end up making much use of that in xi-editor, as it was more common to share across processes, using sophisticated diff/delta and caching protocols.
A rope is a fairly niche data structure. You really only want it when you’re dealing with large sequences, and also doing a lot of small edits on them. Those conditions rarely arise outside text editors. But for people building text editing in Rust, I think xi-rope holds up well and is one of the valuable artifacts to come from the project.
There’s a good HN discussion of text editor data structures where I talk about the rope more, and can also point people to the Rope science series for more color.
Async is a complexity multiplier
We knew going in that async was going to be a source of complexity. The hope was that we would be able to tackle the async work once, and that the complexity would be encapsulated, much as it was for the rope data structure.
The reality was that adding async made everything more complicated, in some cases considerably so. A particularly difficult example was dealing with word wrap. In particular, when the width of the viewport is tied to the window, then live-resizing the window causes text to rewrap continuously. With the process split between front-end and core, and an async protocol between them, all kinds of interesting things can go wrong, including races between editing actions and word wrap updates. More fundamentally, it is difficult to avoid tearing-style artifacts.
One early relative success was implementing scrolling. The problem is that, as you scroll, the front-end needs to sometimes query the core to fetch visible text that’s outside its cache. We ended up building this, but it took months to get it right. By contrast, if we just had the text available as an in-process data structure for the UI to query, it would have been quite straightforward.
I should note that async in interactive systems is more problematic than the tamer variety often seen in things like web servers. There, the semantics are generally the same as simple blocking threads, just with (hopefully) better performance. But in an interactive system, it’s generally possible to observe internal states. You have to display something, even when not all subqueries have completed.
As a conclusion, while the process split with plug-ins is supportable (similar to the Language Server protocol), I now firmly believe that the process separation between front-end and core was not a good idea.
Syntax highlighting
Probably the high point of the project was the successful implementation of syntax highlighting, based on Tristan Hume’s syntect library, which was motivated by xi. There’s a lot more to say about this.
First, TextMate / Sublime style syntax highlighting is not really all that great. It is quite slow, largely because it grinds through a lot of regular expressions with captures, and it is also not very precise. On the plus side, there is a large and well-curated open source collection of syntax definitions, and it’s definitely “good enough” for most use. Indeed, code that fools these syntax definitions (such as two open braces on the same line) is a good anti-pattern to avoid.
It may be surprising just how much slower regex-based highlighting is than a fast parser. The library that xi uses, syntect, is probably the fastest open-source implementation in existence (the one in Sublime is faster but not open source). Even so, it is approximately 2500 times slower at parsing Markdown than pulldown-cmark. And syntect doesn’t even parse setext-style headings correctly, because Sublime-style syntax definitions have to work line at a time, and the line of dashes following a heading isn’t available until the next line.
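For a feel of the non-regex side of that comparison, pulldown-cmark is driven roughly like this (a minimal sketch; note the setext-style heading in the input, which a whole-document parser handles without trouble):

```rust
use pulldown_cmark::{html, Parser};

fn main() {
    // A setext-style heading: the underline only arrives on the second line.
    let markdown = "Heading\n-------\n\nSome *emphasis* and a [link](https://example.com).";

    // pulldown-cmark is a pull parser: events are produced lazily and rendered
    // in a single pass over the whole document, with no regex machinery.
    let parser = Parser::new(markdown);
    let mut html_out = String::new();
    html::push_html(&mut html_out, parser);
    println!("{}", html_out);
}
```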
These facts influenced the design of xi in two important ways. First, I took it as a technical challenge to provide a high-performance editing experience even on large files, overcoming the performance problems through async. Second, the limitations of the regex-based approach argued in favor of a modular plug-in architecture, so that as better highlighters were developed, they could be plugged in. I had some ambitions of creating a standard protocol that could be used by other editors, but this absolutely failed to materialize. For example, Atom instead developed tree-sitter.
In any case, I dug in and did it. The resulting implementation is impressive in many ways. The syntax highlighter lives in a different process, with asynchronous updates so typing is never slowed down. It’s also incremental, so even if changes ripple through a large file, it updates what’s on the screen quickly. Some of the sophistication is described in Rope science 11.
There was considerable complexity in the implementation. Text was synchronized between the main xi-core process and the plug-in, but for large files, the latter stores only a fixed-size cache; the cache protocol ended up being quite sophisticated. Updates were processed through a form of Operational Transformation, so if a highlighting result raced a text edit, it would never color an incorrect region (this is still very much a problem for language server annotations).
As I said, syntax highlighting was something of a high point. The success suggested that a similar high-powered engineering approach could systematically work through the other problems. But this was not to be.
As part of this work, I explored an alternative syntax highlighting engine based on parser combinators. If I had pursued it, the result would have been lightning fast and of comparable quality to the regex approach, but syntax descriptions would have been more difficult to create, as they involved a fair amount of manual factoring of parsing state. While the performance would have been nice to have, ultimately I don’t think there’s much of a niche for such a thing. If I were trying to create the best possible syntax highlighting experience today, I’d adapt Marijn Haverbeke’s Lezer.
To a large extent, syntax highlighting is a much easier problem than many of the others we faced, largely because the annotations are a history-free function of the document’s plain text. The problem of determining indentation may seem similar, but is dependent on history. And it basically doesn’t fit nicely in the CRDT model at all, as that requires the ability to resolve arbitrarily divergent edits between the different processes (imagine that one goes offline for a bit, types a bit, then the language server comes back online and applies indentation).
Another problem is that our plug-in interface had become overly specialized to solve the problems of syntax highlighting, and did not well support the other things we wanted to do. I think those problems could have been solved, but only with significant difficulty.
There is no such thing as native GUI
As mentioned above, a major motivation for the front-end / core process split was to support development of GUI apps using a polyglot approach, as Rust wasn’t a suitable language for building GUI. The theory was that you’d build the GUI using whatever libraries and language that was most suitable for the platform, basically the platform’s native GUI, then interact with the Rust engine using interprocess communication.
The strongest argument for this is probably macOS, which at the time had Cocoa as basically the blessed way to build GUI. Most other platforms have some patchwork of tools. Windows is particularly bad in this respect, as there’s old-school (GDI+ based) win32, WinForms, WPF, Xamarin, and most recently WinUI, which nobody wants to use because it’s Windows 10 only. Since xi began, macOS is now catching up in the number of official frameworks, with Catalyst and SwiftUI added to the roster. Outside the realm of official Apple projects, lots of stuff is shipping in Electron these days, and there are other choices including Qt, Flutter, Sciter, etc.
When doing some performance work on xi, I found to my great disappointment that performance of these so-called “native” UI toolkits was often pretty poor, even for what you’d think of as the relatively simple task of displaying a screenful of text. A large part of the problem is that these toolkits were generally made at a time when software rendering was a reasonable approach to getting pixels on screen. These days, I consider GPU acceleration to be essentially required for good GUI performance. There’s a whole other blog post in the queue about how some toolkits try to work around these performance limitations by leveraging the compositor more, but that has its own set of drawbacks, often including somewhat ridiculous RAM usage for all the intermediate textures.
I implemented an OpenGL-based text renderer for xi-mac, and did similar explorations on Windows, but this approach gives up a lot of the benefits of using the native features (as a consequence, emoji didn’t render correctly). Basically, I discovered that there is a pretty big opportunity to build UI that doesn’t suck.
Perhaps the most interesting exploration was on Windows, the xi-win project. Originally I was expecting to build the front-end in C# using one of the more mainstream stacks, but I also wanted to explore the possibility of using lower-level platform capabilities and programming the UI in Rust. Early indications were positive, and this project gradually morphed into Druid, a native Rust GUI toolkit which I consider very promising.
If I had said that I would be building a GUI toolkit from scratch as part of this work when I set out, people would have rightly ridiculed the scope as far too ambitious. But that is how things are turning out.
Fuchsia
An important part of the history of the project is its home in Fuchsia for a couple years. I was fortunate that the team was willing to invest in the xi vision, including funding Colin’s work and letting me host Tristan to build multi-device collaborative editing as an intern project. In many ways the goals and visions aligned, and the demo of that was impressive. Ultimately, though, Fuchsia was not at the time (and still isn’t) ready to support the kind of experience that xi was shooting for. Part of the motivation was also to develop a better IME protocol, and that made some progress (continued by Robert Lord, and you can read about some of what we discovered in Text Editing Hates You Too).
It’s sad this didn’t work out better, but such is life.
A low point
My emotional tone over the length of the project went up and down, with the initial enthusiasm, stretches of slow going, a renewed excitement over getting the syntax highlighting done, and some other low points. One of those was learning about the xray project. I probably shouldn’t have taken this personally, as it is very common in open source for people to spin up new projects for a variety of reasons, not least of which is that it’s fun to do things yourself, and often you learn a lot.
Even so, xray was a bit of a wake-up call for me. It was evidence that the vision I had set out for xi was not quite compelling enough that people would want to join forces. Obviously, the design of xray had a huge amount of overlap with xi (including the choice of Rust and decision to use a CRDT), but there were other significant differences, particularly the choice to use Web technology for the UI so it would be cross-platform (the fragmented state of xi front-ends, especially the lack of a viable Windows port, was definitely a problem).
I’m putting this here because often, how you feel about a project is just as important, even more so, than technical aspects. I now try to listen more deeply to those emotional signals, especially valid criticisms.
Part of the goal of the project was to develop a good open-source community. We did pretty well, but looking back, there are some things we could have done better.
A lot of the friction was simply the architectural burden described above. But in general I think the main thing we could have done better is giving contributors more agency. If you have an idea for a feature or other improvement, you should be able to come to the project and do it. The main role of the maintainers should be to help you do that. In xi, far too often things were blocking on some major architectural re-work (we have to redo the plug-in API before you can implement that feature). One of the big risks in a modular architecture is that it is often expedient to implement things in one module when to do things “right” might require it in a different place, or, even worse, require changes in inter-module interfaces. We had these decisions a lot, and often as maintainers we were in a gate-keeping role. One of the worst examples of this was vi keybindings, for which there was a great deal of community interest, and even a project done off to the side to try to achieve it, but never merged into the main project.
So I think monolithic architectures, perhaps ironically, are better for community. Everybody takes some responsibility for the quality of the whole.
In 2017 we hosted three Google Summer of Code Students: Anna Scholtz, Dzũng Lê, and Pranjal Paliwal. This worked out well, and I think GSoC is a great resource.
I have been fortunate for almost the entire time to have Colin Rofls taking on most of the front-line community interaction. To the extent that xi has been a good community, much of the credit is due him.
One of the things we have done very right is setting up a Zulip instance. It’s open to all with a Github account, but we have had virtually no difficulty with moderation issues. We try to maintain positive interactions around all things, and lead by example. This continues as we pivot to other things, and may be one of the more valuable spin-offs of the project.
Conclusion
The xi-editor project had very ambitious goals, and bet on a number of speculative research subprojects. Some of those paid off, others didn’t. One thing I would do differently is more clearly identify which parts are research and which parts are reasonably straightforward implementations of known patterns. I try to do that more explicitly today.
To a large extent the project was optimized for learning rather than shipping, and through that lens it has been pretty successful. I now know a lot more than I did about building editor-like GUI applications in Rust, and am now applying that to making the Druid toolkit and the Runebender font editor. Perhaps more important, because these projects are more ambitious than one person could really take on, the community started around xi-editor is evolving into one that can sustain GUI in Rust. I’m excited to see what we can do.
Discuss on Hacker News and /r/rust.

Why Frontend Developers should learn Firebase in 2020?
While Firebase has been around for quite some time, it really gained traction in the last couple of years, after the popularity of Google Cloud Platform increased and several new Firebase services were introduced. If you are using React.js, Angular, Vue.js, or any other frontend development framework, you will benefit from Firebase. It provides a free online database and several other useful services like Firestore, Firebase Auth, and Cloud Functions. Firebase is equally useful for mobile developers building apps with Swift on iOS or for Android; they can also use Firebase services to create the backend for their applications.
More often than not, frontend developers get stuck when there is no API to consume: they need an API to fetch data, authenticate users, and take payments, and if that’s not available they can’t make progress.
Many companies have frontend and backend developers working in tandem, but for a POC or a demo you can feel stuck when there is no backend developer and you don’t know how to set up a backend yourself. Firebase solves that problem by providing a database and pre-built APIs, along with authentication and payment support.
And I can say from experience that if you can handle CRUD, authentication, and payments, you can more or less build a POC for any application.
In this article, I am going to tell you why frontend and mobile developers should learn Firebase and how it can help them to quickly create a web application or mobile apps in 2020.
I first came to know about Firebase when I was learning Vue.js and looking for a public API to build my application against. I ended up using the GitHub User API, which was fine for loading users and showing their repositories, but you have no control over the data; Firebase gives you that control.
What is Firebase Exactly?
If you don’t know, Firebase is an online, free service provided by Google which acts as a very feature-rich, fully-fledged back-end to both mobile and web applications. Frontend Developers can use Firebase to store and retrieve data to and from a NoSQL database called Firestore, as well as to authenticate their app’s users with the Firebase Auth service.
Firestore is Firebase’s newer document database, while the original Realtime Database is essentially one big JSON object that developers can manage in real time. Either one lets you set up the data you want for your application, while Firebase Authentication is built by the same people who created Google Sign-in, Smart Lock, and Chrome Password Manager.
Firebase also provides a service called Firebase Cloud Functions which allows you to run server-side JavaScript code in a Node.js environment, and you can also deploy all of your applications to Firebase hosting.
What are Important Firebase Services for Frontend Developers?
Actually, there are a lot more Firebase services than I have mentioned here, and you can broadly divide them into two categories: development and testing services, and analytics (“grow and engage”) services.
Here is a list of some of the most popular Development and Testing Firebase Services for Frontend Developers:
- Realtime Database
- Auth
- Test Lab
- Crashlytics
- Cloud Functions
- Firestore
- Cloud Storage
- Performance Monitoring
- Crash Reporting
- Hosting
- Grow & Engage your audience
The best thing about Firebase is that with just a single API, the Firebase database provides your app with both the current value of the data and any updates to that data.
Why Frontend Developers should learn Firebase?
So now that you know the capabilities of Firebase, we can summarise why frontend developers should learn it. Here are some of the key reasons why I think both frontend and mobile app developers will benefit from learning Firebase:
1) Unblocks Frontend Development
Firebase provides a ready-made backend that frontend developers can hook their GUI up to without waiting for a custom backend to be ready.
2) Faster Development
Firebase provides the database, authentication, payments, and APIs that are an integral part of any frontend application; with those readily available, your development time is significantly reduced.
3) Better Code
People might argue that using Firebase locks you into Google Cloud Platform and that you may not be able to deploy your web application or mobile app on AWS, Azure, or any other cloud platform, but that’s not true. As long as you follow standard coding practices and separation of concerns, you can encapsulate the Firebase interaction in a service or data layer.
In most cases, developers use Firebase during development with an actual backend in production, so they design their app in such a way that switching to a different backend is easy. This approach results in a better structure.
4) Speed and Simplicity
Firebase provides not only blazing-fast data storage but also a simple API, which can be tempting if you are considering using Firebase in production.
That’s all about why frontend developers should learn Firebase. These were just some of the most important reasons I can think of right now, but there are many more. Firebase keeps evolving and adding services, and more and more companies are starting to use it in production as well.
If you are learning Angular, React.js, or Vue.js then Firebase can really help you with developing projects and mastering the frontend framework of your choice.
Thanks for reading this article so far. If you like this article then please share it with your friends and colleagues. If you have any questions or feedback then please drop a note.

Business Website Builder | MobiWebApps
Transcript
Mobiwebapps is a business website builder offering WordPress-only websites starting at $49 a month, pitched at businesses that want to keep building during COVID-19 by “riding the digital wave”. Plans include premium WordPress themes, free web hosting, a free SSL certificate, basic on-page SEO, free content and logo, backup, social media integration, and lead management / CRM support. Professional content writers and designers help the website rank better.
Getting a website takes four steps: 1. choose your theme, 2. choose your plan, 3. the team reaches out to you, 4. your website is delivered as per your request. The company describes its mission as simplifying the process to reduce the time and cost of building a website without compromising on quality, and positions itself as a one-stop shop for digital needs, from building a website from scratch to digital marketing services.
Contact: C-205, Sebiz Square, IT-C6, Sector 67, Mohali 160062, Punjab, India. Call: IND +91 (816) 813-4735, US +1 (562) 666-3912. Email: [email protected]. Website: https://mobiwebapps.com

Gadgets 360 All Mobile Brands wherever …
Transcript
Resume infographic for Rich E. Sanchez — attentive observer, creative thinker, digital producer. Address: 117 West 13 St., New York, NY 10011. Contact: 908.930.3253, [email protected], @RICH_ESANCHEZ, behance.net/richesanchez. Stated purpose: to secure a position where he is challenged to think creatively to impact problems and behaviors “for people like you and me”.
Experience:
- Creative Strategist / Analyst, Johnson & Johnson Creative Lab (3/2014–today): leads ideation and vendor relationships to improve employee learning on business intelligence, analytics, and big data using gamification mechanics; responsibilities include researching, pitching, prototyping, and vendor engagements.
- Digital Producer / Analyst, Johnson & Johnson IT Services for Medical Devices & Diagnostics (5/2013–3/2014): coordinated end-to-end production of digital IT initiatives including web development, website redesign, interactives, mobile, e-marketing, and video production; responsibilities included requirements gathering, scope definition, ideation, wireframing, creative direction, status reporting, QA, and analytics.
- Marketing Media Manager, Johnson & Johnson TEDxJNJ (8/2013–2/2014): managed a virtual team creating and distributing promotional media assets for marketing campaigns at 38 sites in North America and 32 sites in EMEA, Latin America, and ASPAC.
- Web Producer / Associate Analyst, Johnson & Johnson IT Services for Corporate (6/2012–4/2013): led offshore development resources through design, development, QA testing, and the timely launch of an internal web application, improving overall user experience by 30%.
- Project Manager / Marketing & Communication Lead, Stevens Institute of Technology SeatFinder (9/2011–4/2012): led a cross-functional team to design, develop, test, implement, and pitch a mobile/web solution for early detection of available seats in the library.
- Event Coordinator, Stevens Institute of Technology Student Life (5/2010–9/2010): planned and ran the university’s orientation program for an incoming class of 520 freshmen, leading a team of 50 student leaders.
- Creative Producer (freelance), Stevens Institute of Technology student organizations (2/2008–3/2012): curated creative content for various brands and campaigns, including video production, motion graphics, apparel design, web design, and print design.
Education: Stevens Institute of Technology, Bachelor of Computer Engineering (9/2007–5/2012), graduated with honors, 3.5 GPA.
Languages: English, Spanish, Guarani. Skills: Adobe Creative Suite, Google Analytics, HTML/CSS, Mac OS X/Windows, creative thinking, client relationships, project management, data visualization. Interests include futbol, electronic dance music, comedic movies, and tech trends. Adventures: 17 U.S. states, 9 countries, 3 World Cups.

Positive Effects of Signing Up With Our Web Design Company in Indianapolis
Transcript
NEXBIT — Empowering E-Business: positive effects of signing up with our web design company.
- We create websites personalized to your branding: personalized websites are best suited for doing business online, and you can easily make changes as your requirements or the technology change.
- We design websites in a professional way: working with the Nexbit web design team ensures that your new high-quality, responsive, and fast website displays your level of professionalism.
- We have top-quality website developers: Nexbit is a web design, web development, SEO, and digital marketing business in Indianapolis, with a team of website developers who will help you create your dream website.
- Top-class support provided by a passionate team: Nexbit’s digital marketing business offers reasonably priced websites and web hosting services; contact us today to find out more.
For more: https://www.nexbit.us/

GitHub redesign goes mobile-friendly – to chagrin of devs who shockingly do a lot of work on proper computers
GitHub has redesigned its web repository layout for an “improved mobile web experience”, but developers were quick to find flaws in the new approach.
GitHub said that its new design has three key features. First, a responsive layout to improve usability on mobile web browsers. Second, a repository sidebar for surfacing “more content”. Third, the ability to show and hide releases, packages and environments in the repository sidebar.
The team promises that this will be the foundation for future improvements to accessibility along with a dark mode option.
Much usage of GitHub does not require visiting the site. Integration with IDEs and code editors is strong, or you can use the GitHub Desktop application, or direct Git commands. Many important open-source repositories have a bare-bones web UI, like this one for LibreOffice. Performance is good and developers have all they need for managing code.
Sites like GitHub and rival GitLab, however, have ambitions beyond hosting code. They are constantly adding DevOps features, like GitHub’s Workflow Templates, designed to make it easy to get started with Actions, GitHub’s mechanism for automating code build and deployment. There is also the matter of collaboration with team members who are not developers, but may need to engage with issues and discussions that are part of a project.
The new design does look better on an iPhone, but developers with big screens are not so impressed. “The single worst change is that you can’t see the latest commit status from the repo screen. Instead, you get the commit hash, and have to click a tiny ellipsis button to get the commit message and the status indicator,” said one developer on Hacker News, winning a response from GitHub CEO Nat Friedman, who said: “This is something we should definitely fix.”
There is also a suspicion that GitHub is going for prettiness over information density, though Friedman said: “I don’t think there is a principle of lowering information density at work here. I think it’s just a design that we will keep iterating. We are pro information density at GitHub.”
Too much white space is a common complaint. “Large patches of white really hurt to look at for any real amount of time,” said another developer, though Friedman confirmed that dark mode is on the way. Another issue is the width of the layout. “They place content at extreme ends of the screen, completely stretched out like a rubber band,” said one.
Some feel that the redesign was rushed without any real consultation. It was an optional preview for a short time, but “you just rolled it out for a couple of weeks basically to see if there were any showstopper bugs before you went live,” complained a user. Another challenged, “In what ways do you consider the new design to be better?”
It is not all negative, though. “It’s a positive improvement. Better information layout, and the code is still front and center,” said another commenter.
The introduction of the new design highlights an oddity about the Microsoft-owned source code management and developer collaboration service, which is that its own development is more top-down than collaborative. “Where do I post feature requests for GitHub?” asked a user in November 2018. GitHub then introduced a feedback form, where requests go into the black hole of “We read and evaluate all feedback carefully, but we may not be able to respond to every submission.”
Microsoft at least has its UserVoice forums where users can both submit feedback and vote on feedback from others – like this one for Office 365 or this for Azure.
Strange to say, but GitHub could do with a little more collaboration. ®

‘BlueLeaks’ Exposes Files From Hundreds of Police Departments
New submitter bmimatt shares a report from Krebs On Security: Hundreds of thousands of potentially sensitive files from police departments across the United States were leaked online last week. The collection, dubbed “BlueLeaks” and made searchable online, stems from a security breach at a Texas web design and hosting company that maintains a number of state law enforcement data-sharing portals. The collection — nearly 270 gigabytes in total — is the latest release from Distributed Denial of Secrets (DDoSecrets), an alternative to Wikileaks that publishes caches of previously secret data.
In a post on Twitter, DDoSecrets said the BlueLeaks archive indexes “ten years of data from over 200 police departments, fusion centers and other law enforcement training and support resources,” and that “among the hundreds of thousands of documents are police and FBI reports, bulletins, guides and more.” KrebsOnSecurity obtained an internal June 20 analysis by the National Fusion Center Association (NFCA), which confirmed the validity of the leaked data. The NFCA alert noted that the dates of the files in the leak actually span nearly 24 years — from August 1996 through June 19, 2020 — and that the documents include names, email addresses, phone numbers, PDF documents, images, and a large number of text, video, CSV and ZIP files. The NFCA said it appears the data published by BlueLeaks was taken after a security breach at Netsential, a Houston-based web development firm.

What Is Data Center Security? 6 Ways to Ensure Your Interests Are Protected
Protect your servers and data with these data center security best practices from 7 IT and cybersecurity experts
Imagine that you have a stack of gold bars and you’re responsible for protecting it. Would you leave it out in the open where any thief could get their hands on it, or would you keep it under lock and key?
This same analogy applies to your data center — a virtual goldmine of information — yet many companies choose to do the minimum when it comes to data center security. Your data center — the networked computer servers and devices that process, distribute and store your precious data — is a critical component of your organization’s digital infrastructure. Data center security is the combination of policies, processes, procedures, and technologies that secure it from cyber attacks and other virtual threats.
So, what are the data security standards you should know to meet and maintain compliance? We’ve consulted several IT and cybersecurity experts to pick their brains and share their data center security best practices.
Let’s hash it out.
The Importance of Data Center Security Continues to Grow

It’s no surprise that the security of your data is crucial for any business. It’s invaluable info that can make or break your business. Proprietary information such as intellectual property and trade secrets, as well as customers’ personal and financial information are all examples of the types of data that might be found within a data center.
Intentional or accidental data exposure can lead to:
- Reputational damage and loss of customer trust — If word gets out that you’re not taking the necessary steps to protect your customers’ data (or even your own intellectual property), why should they trust you?
- Noncompliance fines from industry regulations — There are several key regulations that have requirements related to data center security, including PCI DSS, HIPAA, GDPR, SAE 18 (formerly SAE 16), and ISO 27001: 2013.
- Financial damages and lost revenue — Downtime is a major concern for businesses and can result in significant revenue losses.
Shayne Sherman, CEO of TechLoris, says the importance of data center security can’t be overstated and that it should be a top priority for every business.
“Taking the time to make sure the building is secure, your employees are well-versed in cyber security prevention, and that you’re meeting compliance requirements goes a long way in protecting your assets from malicious actors.”
— Shayne Sherman, CEO of TechLoris
So, needless to say, you’ll find yourself in hot water if any of this information winds up in the wrong hands. This is why you need to know some data center security best practices that you can put into action.
Tip #1: Implement Data Center Physical Security Measures
When people think of the types of security measures that they have in place to protect their organization’s data, they don’t necessarily consider the physical security aspect. Why? They’re often too preoccupied with concerns relating to data loss risks that stem from cyber attacks and data breaches.
However, what companies may not realize is that physical security threats can be some of the most impactful. One such example would be the case of Anthony Levandowski, a former Google engineer who pleaded guilty to stealing the company’s trade secrets and giving them to Uber.
According to an article in the New Yorker, Levandowski accessed Google’s servers directly to carry out the theft:
“According to Google, a month before Levandowski resigned, he had plugged his work-issued laptop into a Google server and downloaded about fourteen thousand files, including hardware schematics. He transferred the files to an external drive and then wiped his laptop clean.”
There are a few main types of data centers that an organization can have based on its needs and available resources:
- Public cloud data center — This type of data center is one that’s off premises and is hosted by public cloud providers such as IBM Cloud, Amazon Web Services (AWS), and other tech giants. There’s a lot of debate within the industry about how secure these platforms are despite growing adoption, but many of those issues are at the customer level (such as server misconfigurations) and are not at the provider level.
- Private managed hosting data center — This type of data center is one in which you are sharing servers with other companies and organizations. This is great for companies that have limited tech expertise or can’t afford a lot of capital expenditure costs up front. However, it’s not necessarily the most secure option.
- Colocation data center — This type of data center is one in which a company shares space with other companies, but owns its own servers and other equipment. This offers more protection for your data than managed hosting data centers because you own your own equipment and aren’t sharing it with other organizations.
- On-site data center — This type of data center is one that you house within your own facility. Having an on-premises data center offers the greatest level of security but also has significantly higher operational costs than other data storage options.
With each being so different, it means that the security needs of each type are different. So, what should be the first data center security consideration?
Location, Location, Location
If you’re creating your own data center and aren’t relying on a cloud or colocation data center, intentionally planning out the physical space of your data center is essential. This includes deciding whether you want your data center to be in a secluded location or a more populated area.
But what else should you keep in mind when planning a data center location in terms of security? Be conscientious of weather-related dangers and low-lying areas. (We’ve found that floodwaters and technology aren’t a great mix.) Also be sure to watch out for geologically active zones that are prone to earthquakes.
If you’re going to build in a more populated area, you can hide your data center in plain sight by making it blend in with its surroundings.
If you’re using a service provider’s facility, check out the construction and location of their building. You can also request compliance reports to see how they measure up.
Key Data Center Physical Security Measures
But aside from the location, there are many other physical security considerations. Data center hardening can include:
- Reinforced concrete walls and structures that can protect the facility from external attacks
- Server cabinets and cages that are bolted into the ground and secured with locks
- Environmental controls that monitor and regulate temperature and humidity changes
Mark Soto, owner of the cybersecurity and IT services company Cybericus, is quick to state that although physical attacks aren’t as common as cyber attacks, they’re still very real threats to your data security.
“You need to set up security measures around the data center to make sure that it’s secure. This can be either through a badge system or a pin pad to only allow certain people with access to these locations.
Be fully aware of the people that pass through the facility. As mentioned above, 30% of data breaches are due to internal users. You should be very careful as a company on who has access to the data center and what parts they have access to. This can involve anything from performing background checks on employees, and third party contractors who have access to your data center facilities.”
— Mark Soto, owner of Cybericus
Ben Hartwig, a web operations executive at InfoTracer, says that you need to consider the physical design of your facility to truly gauge your data center security.
“A main concern is the building or facility design itself when it comes to physical security. Key points of physical security include 24/7 video surveillance, metal detectors and on-site security guards, as well as layered security measures, security checkpoints, customized to reflect the sensitivity of the protected data, limited or single entry and exit points, and more.”
— Ben Hartwig, Web Operations Executive at InfoTracer
Some types of data centers also have additional physical requirements such as those outlined by the Telecommunications Industry Association (TIA) in their data standards ANSI/TIA-942/TIA-942A.
Hartwig also suggests taking traditional security measures to the next level. Some methods include using multifold access controls and enforcing specialized security methods in every area and room.
“Every individually-secured zone should demand more than one form of identification and pass control, since not all employees ought to have access to every part of a data center.
Use access cards and identification badges, or other protection which includes scales that weigh visitors upon entering and exiting the premises, continuous background checks of authorized staff and biometric locks.”
— Ben Hartwig, Web Operations Executive at InfoTracer
Tip #2: Monitor and Restrict Not Just Physical Access But Virtual Access As Well
But securing your data requires more than just installing door locks and cameras. You actually need to monitor digital access as well. Why? Of the data breaches reported in IBM and the Ponemon Institute’s 2019 Cost of a Data Breach Report, 49% were identified as resulting from human errors and system glitches rather than cyber attacks.
Ross Thomas, IT administrator here at The SSL Store, says that one of the more obvious data center security best practices is to review the permissions that are set for any users who have access to your servers.
“Periodic permission auditing is crucial to make sure that access is only delegated to those that need it. Root users can be very dangerous as they are able to make any changes or execute any code or processes. But, root users are necessary. Assigning processes, tasks, etc., to the correct user is the absolute safest way to delegate processes. When personnel leave an organization, there should be proper evaluation of their status in all systems to determine if they have access even if it is not through the front door.”
— Ross Thomas, IT administrator at The SSL Store
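As a concrete starting point for that kind of periodic review, here is a minimal, Linux-only sketch of an access audit. It is our illustration rather than a tool Thomas describes: it lists UID-0 (root-equivalent) accounts and the members of common admin groups so a reviewer can confirm the access is still justified. The file paths and group names are assumptions to adjust for your environment.

```python
# Minimal permission-audit sketch (illustrative, Linux-only): list accounts with
# UID 0 and the members of common admin groups so a reviewer can confirm that
# elevated access is still justified. Paths and group names are assumptions.
ADMIN_GROUPS = {"sudo", "wheel", "adm"}  # adjust for your distribution

def uid_zero_accounts(passwd_path="/etc/passwd"):
    """Return account names whose UID is 0 (root-equivalent)."""
    accounts = []
    with open(passwd_path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            name, _pw, uid, *_rest = line.split(":")
            if uid == "0":
                accounts.append(name)
    return accounts

def admin_group_members(group_path="/etc/group"):
    """Return a mapping of admin group name -> list of member accounts."""
    members = {}
    with open(group_path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            name, _pw, _gid, member_list = line.split(":", 3)
            if name in ADMIN_GROUPS and member_list:
                members[name] = member_list.split(",")
    return members

if __name__ == "__main__":
    print("UID 0 accounts:", ", ".join(uid_zero_accounts()))
    for group, users in admin_group_members().items():
        print(f"{group}: {', '.join(users)}")
```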
And if you weren’t already concerned about phishing scams and password insecurities, you should be. Verizon’s 2020 Data Breach Investigations Report (DBIR) shows that four in five hacking-related breaches involve brute force or the use of lost or stolen credentials.

So, if you can’t automatically trust that your users are who they claim to be, what’s the solution?
Adopt a Zero Trust Approach
Sami Ullah, pre-sales manager at Kualitatem Inc., an independent software testing and information systems auditing company, says that organizations should implement a zero-trust architecture:
“The Zero Trust Model treats every transaction, movement, or iteration of data as suspicious. It’s one of the latest intrusion detection methods. The system tracks network behavior, and data flows from a command center in real time. It checks anyone extracting data from the system and alerts staff or revokes rights from accounts [if] an anomaly is detected.”
— Sami Ullah, pre-sales manager at Kualitatem Inc.
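As a rough illustration of that behavior-tracking idea (not any particular zero-trust product), the sketch below compares each account's data-egress volume against its recent baseline and "revokes" access when a transfer far exceeds it. The window size, anomaly factor, and the revoke_access() hook are all placeholders.

```python
# Hedged sketch of "treat every transaction as suspicious": compare each
# account's data-egress volume against a rolling baseline and revoke access
# when an anomaly threshold is exceeded. Thresholds and revoke_access() are
# placeholders, not a real product API.
from collections import defaultdict, deque

BASELINE_WINDOW = 20   # number of recent transfers kept per account
ANOMALY_FACTOR = 5.0   # flag transfers this many times larger than the average

history = defaultdict(lambda: deque(maxlen=BASELINE_WINDOW))

def revoke_access(account: str) -> None:
    # Placeholder: integrate with your IAM / directory service here.
    print(f"[ALERT] access revoked for {account}")

def record_transfer(account: str, bytes_out: int) -> None:
    """Check a data transfer against the account's baseline, then record it."""
    past = history[account]
    if past:
        average = sum(past) / len(past)
        if bytes_out > ANOMALY_FACTOR * average:
            revoke_access(account)
    past.append(bytes_out)

if __name__ == "__main__":
    for size in [10_000, 12_000, 9_500, 11_000, 900_000]:  # last transfer is anomalous
        record_transfer("svc-reporting", size)
```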
Tip #3: Use the Right Tools to Secure Your Data and Network
A strong data center security strategy is one that uses perimeter-based security tools to monitor and protect your network from internal and external threats. Part of this approach is to properly configure and secure your endpoints, networks, and firewalls (this is the heart of security).
Vladlen Shulepov, CEO at the international software development company Riseapps, highlights several of the key monitoring and detection tools that should be in your security arsenal:
“External threats are usually the worst enemy of a data center, so protective solutions are necessary. Intrusion detection systems, IP address monitoring, and firewalls are some of the most helpful tools to protect your data center from outside breaches and ensure its security.”
— Vladlen Shulepov, CEO at Riseapps
Ross Thomas, IT administrator at The SSL Store, says that using reverse proxies is also a great option. A reverse proxy acts like a front-line cache, serving static and dynamic content on behalf of your servers rather than letting users directly access a webserver or database server for every request.
“Adding a reverse proxy to sit in front of a webserver is a good idea for security. It disassociates the public from directly accessing a webserver that contains production code or a means to get to valuable information, such as a database. It can also offload some of the processing and functionality to allow the primary server to operate at full (or near full) potential. A reverse proxy is not too different from a load balancer and can often be one in the same depending on the server structure (clustering, for example). In any event, it is a safe bet to protect valuable production code/data.”
— Ross Thomas, IT administrator at The SSL Store
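To show the mechanism rather than any particular product, here is a deliberately minimal reverse-proxy sketch using only Python's standard library. It assumes a backend application listening on 127.0.0.1:8080; clients talk to the proxy on port 8000 and never reach the backend directly. A real deployment would use a hardened proxy such as nginx or HAProxy, with TLS, caching, and full header handling.

```python
# Minimal reverse-proxy sketch (illustrative only, not production-ready).
# Assumes a backend app server on 127.0.0.1:8080; the proxy listens on port 8000,
# forwards GET requests, and keeps clients from reaching the backend directly.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

BACKEND = "http://127.0.0.1:8080"  # hypothetical upstream application server

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        upstream = Request(BACKEND + self.path,
                           headers={"X-Forwarded-For": self.client_address[0]})
        try:
            with urlopen(upstream) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type",
                                 resp.headers.get("Content-Type", "application/octet-stream"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except HTTPError as err:          # backend returned an error status
            self.send_error(err.code)
        except URLError:                  # backend unreachable
            self.send_error(502, "Bad gateway: backend unreachable")

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8000), ReverseProxyHandler).serve_forever()
```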
If you want to further harden your data center’s cyber defenses, you can (and should):
- Conduct regular audits of your assets, security management processes and access protocols.
- Use network-level encryption to secure your data as it travels between endpoints and server-level encryption to protect the data when it’s at rest.
- Integrate automation and security information and event management (SIEM) tools (or use a third-party service) to continually monitor logs and report on security events and threats.
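As a small illustration of the log-monitoring item above, the following sketch counts failed SSH logins per source address in an auth log and prints an alert once a threshold is crossed. The log path, line format, and threshold are assumptions; a production setup would ship events to your SIEM instead of printing them.

```python
# Hedged sketch of continuous log monitoring: scan an SSH auth log for repeated
# failed logins per source address and alert past a threshold. The log path,
# line format, and threshold are assumptions.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"           # assumed Debian/Ubuntu-style location
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5                            # alert after this many failures per IP

def scan_auth_log(path: str = LOG_PATH) -> Counter:
    """Count failed-login attempts per source IP address."""
    failures = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    return failures

if __name__ == "__main__":
    for ip, count in scan_auth_log().items():
        if count >= THRESHOLD:
            print(f"[ALERT] {count} failed logins from {ip}")
```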
Tip #4: Keep Your Servers and Systems Current
No one likes taking the time out of their day to run boring updates and to apply patching to their systems. After all, you have way more important things to do, right?
We’re pretty sure that the owners of the 230,000 computers affected by the WannaCry ransomware attacks a few years ago would disagree. In those attacks, a hacker group used the NSA’s EternalBlue exploit — which Microsoft had patched, but which WannaCry victims hadn’t applied to their machines — to take over computers at organizations and businesses around the world, including the U.K.’s National Health Service (NHS).
When manufacturers release patches, it’s their way of filling in security gaps that they’ve discovered in their products. It’s like patching a hole in your roof to keep rain from leaking through: the vulnerability gets fixed before a bad guy can exploit it and cause issues.
Simply put, patching and updating your systems can save you a lot of headaches in the long run:
“Make sure your servers remain patched and on the latest software releases. This is the easiest way to protect yourself from known vulnerabilities. Don’t get breached because of something that’s already had a fix.”
— Jayant Shukla, CTO and Co-Founder, K2 Cyber Security
Tip #5: Have Redundant Data Backups and Infrastructure in Place
No matter how many times we talk about data backups, it never seems to be enough. You read in the headlines about how major city governments, hospitals and businesses are left paralyzed by ransomware attacks and other cyber attacks. Yet, for some reason, businesses choose to not take the appropriate precautions for creating redundant data backups.
Is it laziness? Maybe it’s the “it won’t happen to me” mindset. Regardless of the excuses why they shouldn’t, the truth of the matter is that having redundant backups — both in terms of data and secondary infrastructure — in place can save you a lot of time, money, and headaches. When crap hits the fan — and, inevitably, it will — you’ll wish that you’d taken the time to prepare.
I think Hartwig summarizes this next point best:
“Data security and data center security are inseparable. To store and protect data effectively, all data has to be strongly encoded during transfer and always monitored and regularly backed up.”
— Ben Hartwig, Web Operations Executive at InfoTracer
Of course, there are other things that he says are essential in terms of protecting and keeping your infrastructure operational (as well as maintaining uptime):
- Keep your equipment cool. Your data center runs on a variety of hardware — all of which generate a vast amount of heat. High temperatures that are left unchecked can literally cause machines to break down and melt or result in fires, so it’s essential for every data center to use strong climate controls. Part of this includes having secondary cooling systems in place that can kick in should the primary system fail.
- Protect your power supply. Outages can happen for a variety of reasons — everything from human error to issues relating to the weather. They can also result from power losses or short power surges. Regardless of the cause, it means that you need to have backup power systems in place that can kick into gear when things go wrong to keep your equipment and servers functioning.
A last important point worth mentioning is to keep water lines separate from other key systems. Few things can ruin your day like a busted water main. So, be sure to have two lines coming into your facility in different locations, but keep them away from your power sources and other critical infrastructure.
Tip #6: Use Data Center Network Segmentation
Network segmentation divides your data network into separate components based on endpoint identity. By dividing the network and isolating each segment independently, you create additional barriers that attackers have to get through and prevent them from freely roaming around your network.
Mark Soto, whose cybersecurity and IT services company helps businesses whose data centers have been hacked, offers some key insights on what you can do to prevent being attacked and to limit the damage in the event that an attack is successful:
“By using network segmentation, it can help prevent your entire system from getting compromised if hackers are able to access one of your networks. It also gives you time to react in the worst-case scenario where the other networks are also in danger of being hacked.
With network segmentation, you can also specify which network resources your users have access to. In a world where malicious internal users make up at least 30% of data breaches, this might be the biggest benefit of network segmentation.”
— Mark Soto, owner of Cybericus
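To make the access-restriction side of segmentation concrete, here is a small, hypothetical sketch that maps each segment to the subnets allowed to reach it and checks a source address against that policy. Real segmentation is enforced in VLANs, firewalls, and routing rather than application code; the segment names and address ranges below are invented.

```python
# Minimal sketch of segment-based access control (assumed segment names and
# address ranges): map each network segment to the subnets allowed to reach it,
# then check a source address before permitting traffic.
from ipaddress import ip_address, ip_network

SEGMENT_POLICY = {
    # segment being accessed -> subnets whose hosts may reach it
    "database": [ip_network("10.0.20.0/24")],             # app servers only
    "app":      [ip_network("10.0.10.0/24"),              # web tier
                 ip_network("10.0.99.0/28")],             # admin jump hosts
    "web":      [ip_network("0.0.0.0/0")],                # public-facing
}

def is_allowed(source_ip: str, target_segment: str) -> bool:
    """Return True if the source address may reach the target segment."""
    addr = ip_address(source_ip)
    return any(addr in subnet for subnet in SEGMENT_POLICY.get(target_segment, []))

if __name__ == "__main__":
    print(is_allowed("10.0.10.25", "app"))       # True: web tier reaching app tier
    print(is_allowed("10.0.10.25", "database"))  # False: web tier blocked from DB
```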
Final Thoughts on Data Center Security
Businesses run on data, and your ability to keep that data safe can make or break your organization.
Your data center is the place where your network computers, servers and other essential components are stored. It’s your data’s safe haven in the midst of a disaster.
Keep your servers, network, and other related equipment as safe as humanly possible by implementing the following data center security best practices:
- Put physical security measures in place that prevent bad guys from physically gaining access to your network and data storage equipment.
- Implement and enforce access restrictions that ensure only those who need access (both physical and virtual) have it.
- Use the right security tools to report on and protect against many digital security threats.
- Keep everything up to date and patched to eliminate known vulnerabilities.
- Have secondary systems and data backups in place that you can rely on when things go south.
If you’re exploring the idea of using a cloud or managed hosting service provider, you have less control over the physical security measures that are in place than you would with an in-house data center. However, you can ask the provider for compliance reports, which can help you feel more confident in their security capabilities.
We’re sure that you have additional suggestions for data center security, and we’d love to hear ‘em. Be sure to share your insights and suggestions in the comments below.

BigCommerce Provides Dedicated Technical Account Manager to Americaneagle.com
CHICAGO, June 24, 2020 /PRNewswire/ — Americaneagle.com, a full-service, global digital agency, has just strengthened its strategic alliance with one of its top partners. Leading SaaS-based ecommerce platform, BigCommerce is now providing additional support to Americaneagle.com through a dedicated Technical Account Manager (TAM). With this TAM and Americaneagle.com’s Elite Partner status, agency clients that are on BigCommerce can get the support they need for complex builds, implementations, upgrades, and troubleshooting all from one place.
Acting as an advocate for Americaneagle.com clients, the BigCommerce Technical Account Manager will provide guidance and operational management. From the early sales process to production and post-launch, the TAM will assist the Americaneagle.com team with platform configurations, escalate and prioritize cases, give key recommendations for implementations, and provide expert advice on upcoming releases and enhancements. All of these benefits give clients further peace of mind and a greater return on their BigCommerce investment.
Jon Elslager, Americaneagle.com’s BigCommerce Practice Manager, said: “For our customers at Americaneagle.com, having access to a Technical Account Manager gives us the needed assistance within BigCommerce to escalate changes and to keep a pulse on all things new to the platform. Moreover, it helps us to accelerate the rate at which we deliver projects and allows us to more easily stay on time and budget.”
Americaneagle.com has been a BigCommerce partner for over 5 years, launching several large-scale implementations for clients like Berlin Packaging, Carson Dellosa, and Ohio State University. As an Elite Partner, the team has been at the forefront of the platform’s enhancements, developing several connectors and tools within the BigCommerce marketplace along the way. The TAM will amplify all of these efforts and strengthen the agency’s tight-knit partnership with BigCommerce.
About Americaneagle.com
Americaneagle.com is a full-service, global digital agency based in Des Plaines, Illinois that provides best-in-class web design, development, hosting, post-launch support and digital marketing services. Currently, Americaneagle.com employs 500+ professionals in offices around the world including Chicago, Cleveland, Dallas, London, Los Angeles, New York, Nashville, Washington DC, Switzerland, and Bulgaria. Some of their 2,000+ clients include Berlin Packaging, Delasco, The Ohio State University, Stuart Weitzman, WeatherTech, and Monticello. For additional information, visit www.americaneagle.com
Contact
Michael Svanascini, President
[email protected]
847-699-0300
SOURCE Americaneagle.com
Mosaic Warfare (the next big thing)
Air Force Magazine ^ | Nov. 1, 2019 | David A. Deptula USAF (Ret.) and Heather Penney
Posted on 06/24/2020 3:08:40 AM PDT by robowombat
Mosaic Warfare
By David A. Deptula USAF (Ret.) and Heather Penney Nov. 1, 2019
Ever since 1991's Operation Desert Storm, adversaries have systematically watched the American way of war, cataloging the US military's advantages and methods and developing strategies and systems to erode those advantages and exploit vulnerabilities in US force design. Now America faces challenges from China and Russia, each of which has watched and learned from US strategy in Iraq and Afghanistan and has responded by developing anti-access/area-denial (A2/AD) strategies and systems designed to block the United States from intervening should they choose to aggress against their neighbors.
The National Defense Strategy in 2018 sounded the alarm over the risks posed by Chinese and Russian revisionist ambitions. Wargames that centered on major conflicts with China and Russia have resulted in loss after loss for US forces. According to senior RAND analyst David Ochmanek, "In our games, when we fight Russia and China, blue gets its ass handed to it."
An F-35 with USAF's Lightning II Demonstration Team performs aerobatics in September. In the mosaic concept, F-35s and other highly integrated platforms would operate in close cooperation with single-function platforms to create a complete, interconnected, and changeable web of systems. Photo: SrA. Alexander Cook
To overcome, the US military must transform itself to a new force design that can withstand and prevail in a systems warfare conflict. Mosaic warfare is one answer: a way of war that leverages the power of information networks, advanced processing, and disaggregated functionality to restore America's military competitiveness in peer-to-peer conflict.
Mosaic is designed to address both the demands of the future strategic environment and the shortcomings of the current force. The term mosaic reflects how smaller force structure elements can be rearranged into many different configurations or force presentations. Like the small, dissimilar colored tiles that artists use to compose any number of images, a mosaic force design employs many diverse, disaggregated platforms in collaboration with current forces to craft an operational system.
Mosaic employs highly resilient networks of redundant nodes to obtain multiple kill paths and make the overall system more survivable, minimizing the critical target value of any single node on the network. This design ensures US forces can be effective in contested environments and that the resulting force can be highly adaptable across the spectrum of military operations. Mosaic combines the attributes of highly capable, high-end systems with the volume and agility afforded by smaller, less costly, and more numerous force elements, which can be rearranged into many different configurations or presentations. When composed together into a mosaic force, these smaller elements complete operational observe-orient-decide-act cycles (John Boyd's OODA loops) and kill chains. Just like LEGO blocks that nearly universally fit together, mosaic forces can be pieced together in a way to create packages that can effectively target an adversary's system with just-enough overmatch to succeed.
A Chinese military unit fires a surface-to-air missile during a live-fire test in June. Advanced anti-access/area denial (A2/AD) threats are driving the need for a new approach to warfare. Photo: Li Xiaopeng/China Ministry of Defense
CHINA'S SYSTEMS-CONFRONTATION WARFARE
Mosaic is conceived, in particular, as a response to the burgeoning threat posed by China, which has carefully designed its systems warfare strategy to counter America's traditional way of war. China's A2/AD capabilities are designed to block America's physical access to combat zones and negate its ability to maneuver. Yet these systems do not merely pose technical and operational challenges; rather, according to Elbridge A. Colby, one of the authors of the National Defense Strategy, China intends to employ them to achieve strategic-level effects, rendering the most critical elements of US operations ineffective.
The overwhelming effectiveness of the United States in Operation Desert Storm precipitated a major shift in Chinese military theory. China scholar M. Taylor Fravel notes: "China's intensive study of the United States through the 1990s, especially toward the end of the decade, was intended to identify weaknesses that could be exploited, in addition to areas to copy." As a result, China envisions targeting US data links, disrupting information flows, denying command and control, and kinetically targeting physical nodes of US information systems, with the goal of systematically blinding US commanders and paralyzing their operations.
As Colby suggests, the Chinese A2/AD complex is not just an integrated air defense system, but more importantly a critical piece of a larger strategy to target and defeat US forces as a system. RAND analyst Jeffrey Engstrom calls this strategy "system confrontation" and its theory of victory "system destruction warfare." In combat operations, he says, PLA planners specifically seek to strike four types of targets, through either kinetic or nonkinetic attacks, when attempting to paralyze the enemy's operational system.
These attacks encompass:
- Information. Degrading or disrupting the flow of information in the adversary's operational system by targeting networks, data links, and key nodes to leave elements of the operational system information-isolated and thus ineffective.
- High Value Assets. Targeting the key nodes or functionalities within the adversary's operational system, including command and control, ISR, and firepower: "If the essential elements of the system fail or make mistakes, the essence of the system will [become] nonfunctional or useless."
- Operations. Degrading or disrupting the operational architecture of the adversary's operational system seeks to disrupt how elements of an adversary's system collaborate and support each other.
- Speed. Distorting and extending the adversary's time sequence or operational tempo (the OODA loop) aims to induce friction, confusion, and chaos by employing deception, creating nodal failures and network and data link outages to cause stutter at any stage in the decision loop or kill chain.
Lt. Col. Christina Darveau (right) trains 1st Lt. Crystal Na onboard an E-8C JSTARS aircraft at Robins AFB, Ga. Today's JSTARS aircraft center battle management on one potentially vulnerable platform; mosaic seeks to make that capability more survivable by spreading the capability across the fleet. Photo: TSgt. Nancy Goldberger/ANG
THE TRANSFORMATION IMPERATIVE
Future adversaries will learn from China's progress in maturing a systems warfare theory that targets US force design and operations, so systems warfare will not be limited to China over the long term. The Department of Defense should consider systems confrontation and systems destruction warfare as leading indicators, therefore, of how peer and near-peer adversaries could hold at risk US forces and operational architectures in the future.
America's current way of war is vulnerable to this kind of systems warfare because of decisions made in the wake of the dramatic and overwhelming victory of the air campaign in the 1991 Gulf War. Afterward, DOD chose not to invest in maturing its own systems warfare strategy. Consequently, the US military today is unprepared for this emerging threat.
Compounding the problem is the dramatic downsizing of the Air Force after the fall of the Soviet Union. Had the Air Force been allowed to procure planned numbers of B-2s and F-22s; had it been allowed to pursue the Next-Generation Bomber in 2008 as programmed; and had it been allowed to maintain the pace of purchases of F-35s as originally planned, the risk posed by these peer threats today might not be so dire. There would be sufficient force structure to provide strategic depth in response. But nearly 30 years of budget-driven cuts have left the Air Force with margins that are too thin to face a peer threat, much less one employing a systems warfare strategy.
All the military services are in serious need of recapitalization today, but none more so than the Air Force, which is smaller and older than it has ever been in its history. Having spent the last 17 years operating in extremely permissive environments, it now finds itself too small, its information systems too brittle, and its command and control too centralized to withstand systems warfare. US force design therefore must be mapped to how US enemies intend to fight and to fill the resulting gaps in the current US force.
The problems plaguing today's force include:
- Small inventories of capable, high-end multifunction platforms that make US operational architectures too vulnerable.
- The continued practice of buying multiple kinds of high-end weapon systems, but all in such limited numbers that their purchase is neither efficient nor able to provide the force capacity needed for great power conflict.
- Slow development and fielding for major new weapon systems.
- Difficulty scaling current force design appropriately across the spectrum of conflict.
- Critical shortages in key capabilities, such that the current force cannot withstand attrition and survivability factors threaten to outweigh the ability to create effects in future wars.
Without significant changes, neither the ways nor the means available to US forces will be sufficient to accomplish the ends outlined in the 2018 National Defense Strategy. The US military must reinvigorate the theory of systems warfare first manifested during Operation Desert Storm. Toward that end, mosaic warfare offers a new force design for optimizing US forces and operational concepts for the systems warfare of the future, rather than for the conflicts of the past.
MOSAIC WARFARE'S KILL WEB
In conventional warfare, the kill chain is defined by the OODA loop, that is, the steps necessary to observe, orient, decide, and act on a target. But in a mosaic operational construct, the point-to-point chain is replaced by a web of sensor nodes that all collect, prioritize, process, and share data, then fuse it into a continuously updated common operating picture. Instead of tightly integrating all those functions into a single, expensive platform, as in the F-35, in mosaic warfare, these functions are disaggregated and spread among a multitude of manned and unmanned aircraft that share data and processing functions across a perpetually changing network.
The Mitchell Institute's full-sized "Mosaic Warfare's Kill Web" infographic. Graphic: Zaur Eylanbekov/staff.
MOSAIC: A FORCE DESIGN FOR SYSTEMS WARFARE
In the mosaic concept, platforms are decomposed into their smallest practical functions to create collaborative nodes. These functions and nodes may be abstracted and broadly categorized by the familiar functionalities in an OODA loop: observe, orient, decide, and act.
In the past, an F-15 in an air-to-air engagement would need to first observe the airspace in its lane, identifying enemy aircraft with its radar, which is an observation node. When the radar received a return, that contact would be processed through the fire-control computer and displayed on the screen; together, these comprise the orientation node. The pilot can then engage other on-board sensors (additional observation nodes) to improve his orientation before deciding on an action (making the pilot the decision node). Finally, the pilot can take action, pairing a missile to the contact and firing the weapon (the action node).
Up until now, increasing the speed of operations required that all these OODA functions be hosted on a single weapon system to complete a kill chain. Indeed, fifth-generation aircraft have accelerated this process by pushing orientation and decision closer to action at the forward edges of combat. Advances in processing power, algorithms, and data links have made these aircraft incredibly valuable battle managers in contested and dynamic environments.
Historical case studies show that orientation must be located where there is processing capacity to filter, correlate, and fuse observations into meaning, or orientation. The closer orientation and decision nodes are to the point of action, the faster and more effective the outcomes.
Today, however, advanced data links and processing make it possible to integrate these functions even as they are disaggregated into distinct platforms. Thus, these functions can be distributed throughout the battlespace and integrated not in a single platform, but over distance through data links, to achieve effects.
Conceptualizing mosaic through an abstracted, notional operational architecture, where functionality is the focus rather than specific technologies or platforms, enables the development of a more heterogeneous force and technological growth. This is a critical point: Being overly prescriptive with regard to technology risks condemning a force design to rigidity, brittleness, and/or obsolescence.
The design should support both multifunction platforms, hosting many different functionalities, and simple-function nodes hosting just one or two. When pieced together, these smaller functional elements can form operational OODA cycles that today must be managed within a single system. Leveraging advanced networks, data links, and enablers such as artificial intelligence/machine learning, a mosaic design can target adversary systems with just enough overmatch to succeed.
Built on adaptable and highly resilient networks with redundant nodes, these systems could create multiple kill paths, minimizing the critical value of any single system in the network to ensure US forces remain effective in contested environments. In other words, by disaggregating functionality, the mosaic force can survive network and nodal attrition and still be effective. Mosaic combines the attributes of highly capable, high-end systems with the volume and agility afforded by numerous smaller force elements that can be rearranged into many different configurations or presentations.
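To make the kill-web idea concrete, here is a purely illustrative sketch (not drawn from the article) that models OODA nodes as a directed graph and checks whether at least one observe-to-act path survives when individual nodes are knocked out. All node names are hypothetical.

```python
# Illustrative kill-web sketch: OODA nodes as a directed graph; check whether
# any effector is still reachable from a surviving sensor after node losses.
from collections import deque

EDGES = {
    "sensor_drone":   ["f35_fusion", "ground_station"],  # observe -> orient
    "sat_sensor":     ["ground_station"],
    "f35_fusion":     ["f35_pilot"],                      # orient -> decide
    "ground_station": ["c2_cell"],
    "f35_pilot":      ["missile_a"],                      # decide -> act
    "c2_cell":        ["missile_b"],
    "missile_a":      [],
    "missile_b":      [],
}
OBSERVERS = {"sensor_drone", "sat_sensor"}
EFFECTORS = {"missile_a", "missile_b"}

def kill_path_exists(lost_nodes=frozenset()):
    """Breadth-first search: is any effector reachable from a surviving sensor?"""
    frontier = deque(OBSERVERS - lost_nodes)
    seen = set(frontier)
    while frontier:
        node = frontier.popleft()
        if node in EFFECTORS:
            return True
        for nxt in EDGES.get(node, []):
            if nxt not in lost_nodes and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

if __name__ == "__main__":
    print(kill_path_exists())                                  # True: intact web
    print(kill_path_exists({"f35_fusion", "f35_pilot"}))       # True: redundant path via C2
    print(kill_path_exists({"ground_station", "f35_fusion"}))  # False: web severed
```

In this toy web, losing the F-35's fusion and pilot nodes still leaves a path through the ground station and C2 cell, which is the redundancy the authors describe; only when both orientation paths are severed does the web fail.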
Yet the mosaic force design concept is more than just an information architecture. Mosaic offers a comprehensive model for systems warfare, encompassing requirements and acquisition processes; the creation of operational concepts, tactics, techniques, and procedures; and force presentations and force-allocation action, in addition to combat operations. For example, by disaggregating and abstracting the operational architecture into OODA nodes instead of major programs, both requirements setting and acquisition can be simpler and faster. The ad hoc connectivity of a mosaic force enables faster and more adaptive tactical innovation to generate numerous potential kill paths. And because mosaic nodes are like LEGO blocks, force presentations can be tailored and surprising.
The attributes of a mosaic force design can help increase the speed of action across the US warfighting enterprise, whether quickly responding to urgent new requirements, integrating innovative or out-of-cycle capabilities, or developing new operational plans. The guiding principles and technologies that underpin a mosaic force design will help enable the United States to prevail in long-term competitions with great power adversaries.
Maintainers tow an MQ-9 into position for tests before ISR operations at Ali Al Salem AB, Kuwait. Sensor systems like these could work directly with combat weapon systems under the mosaic way of war. Photo: TSgt. Michael Mason
IMPLEMENTING MOSAIC FORCE DESIGN
Implementing a mosaic force design will challenge doctrine, tradition, parochialism, bureaucratic fiefdoms, and even the pride of victories past. Yet, to support the priorities of the 2018 National Defense Strategy, the US must adapt its approach to warfare. To migrate to a mosaic force design, the US must:
- Maintain commitments to current force structure and programs of record. While some defense leaders may advocate for bold moves, bold does not always mean wise. Terminating current programs and divesting force structure without replacements in hand will only exacerbate current vulnerabilities. The acquisition of high-end capabilities, such as the F-35 and B-21, should be accelerated, and the development of disaggregated elements must be introduced to create a future mosaic force.
- Aggressively invest in developing and fielding mosaic enablers. Artificial intelligence underlies nascent, critical technologies, such as autonomy for maneuver, decision-making, and network routing, which together make up the connective tissue that will enable a mosaic force and operational concept. These mosaic enablers will unlock the power of current platforms even as new, simple-function platforms reach the field. Mosaic enablers are about changing how the US employs its forces, not just what is in the inventory. They create the path for the current force to migrate to a more effective, resilient, and surprising mosaic force.
- Experiment with mosaic operational concepts, architectures, and empowered command and control at the edge. Fully aligning information and command-and-control architectures with an operational concept is crucial to any force design. Continuous tactical experimentation with cutting-edge technologies, combined with rigorous operational analysis, is necessary to explore the art of the possible and how to exploit mosaic enabling technologies. These experiments would also help identify other needed technological investments and refine future doctrine and operational architectures.
- Conduct an operations-focused cost assessment of force design alternatives. A future US force capable of deterring or, if necessary, prevailing in a high-end systems warfare conflict will require greater capacity compared to the current force. Sufficient capacity (force size) as well as the right mix of capabilities will be critical to achieving the attack density needed to defeat great power aggression and sustain a deterrent posture in other theaters. High-quality wargaming of force design alternatives augmented by operational and cost analyses could help identify the right force size and mix needed to implement the 2018 NDS.
Many trends already indicate the value and potential of mosaic operations. Early examples of systems, technologies, software, and architectures that are mosaic in nature are already being developed or fielded. Indeed, the US Defense Advanced Research Projects Agency and the services have been investing in maturing many of the mosaic enablers that they have already identified. Mosaic-type operations are not new to the US Air Force, and the service is perhaps the best candidate to take the lead role in developing a mosaic force design concept that could reshape DOD's planning, processes, force structure, and how it executes its missions.
A nation's military backstops the political grand strategy of any great power. The United States must out-adapt adversaries who have adapted, and will continue to adapt, to an obsolescing US force design. Indeed, the United States can migrate to a more effective force design even as new elements are introduced to make it more effective in character and operational concept. What cannot migrate is resistance to this new way of war (a mosaic force design) within a defense culture conditioned by an atypical era of absolute military dominance, permissive threat environments, and a lack of peer adversaries. Swift decisions are needed at the apex to align thinking and resources to the enablers of mosaic warfare.
Three unmanned aerial systems at Edwards AFB, Calif. Drones could serve as observation nodes, communication links, or perform other functions, working as part of a disaggregated system of systems. Photo: SSgt. Rachel Simones
TERMS OF REFERENCE
Systems Warfare
A theory of warfare that does not rely on attrition or maneuver to achieve advantage and victory over the adversary. Instead, systems warfare targets critical points in an adversary's system to collapse its functionality and render it unable to prosecute attack or defend itself. A major objective of this approach is to maximize desired strategic returns per application of force (achieve best value).
Force Design
Overarching principles that guide and connect a militarys theory of warfare and victory, its doctrine, operational concepts, force structure, capabilities, and other enterprise functions.
Disaggregated Element
Functionality that has been decomposed to its most basic practical combat element; for example, an observation or orientation function. These elements can range from simple functions, such as a single-sensor observation node, to more complex platforms, such as a multifunction aircraft, as needed to be viable in the overall combat system.
Node
An element in the combat zone, whether disaggregated or multifunction, that participates in the operational architecture by receiving and sharing information.
Mosaic
A force design optimized for systems warfare. Modular and scalable, a mosaic force is highly interoperable and composed of disaggregated functions that create multiple, simultaneous kill webs against emerging target sets. A mosaic force's architecture is designed for speed, has fewer critical nodes, and remains effective while absorbing information and nodal attrition.
TOPICS: Foreign Affairs; Government; News/Current Events
KEYWORDS: oodaloop; warfare
1 | posted on 06/24/2020 3:08:40 AM PDT by robowombat
To: robowombat
2 | posted on 06/24/2020 3:09:53 AM PDT by Veggie Todd (Voltaire: “Religion began when the first scoundrel met the first fool”.)
To: Veggie Todd
As near as I can penetrate the buzzwords, he's talking about moving away from expensive "do everything" platforms like the F-35 toward fighting with a bunch of cheaper single-function platforms that get tied together.
For example, have an F-35 pilot control a drone which goes ahead and spots targets, and have other drones which carry extra ordnance.
The weakness of this approach is that all these platforms have to communicate in order to be coordinated, which turns their communication into a vulnerability.
3 | posted on 06/24/2020 3:27:13 AM PDT by SauronOfMordor (A Leftist can’t enjoy life unless they are controlling, hurting, or destroying others)
To: robowombat
Destroying the tooling for the F-22 to get funding for the F-35? If true, that kind of stupidity is just stupid.
I have always been a proponent of numbers, and numbers are our enemy. Both the F-22 and F-35 fleets are too small to withstand attrition in battle. That load falls on the F-15 and F-16, with little in between. Old vs. new: as time goes by, the old jets get older, and their numbers are further reduced by age and technology. Where is the fighter that can fight, not a suicidal battle, but on even terms with the enemy? Being technologically superior requires more money than is practical to maintain the numbers necessary for battlefield attrition.
One very important part of the equation is how and what future air superiority is going to look like. Just what will it take to control the sky you wish to control? Is it airplanes, is it satellites, is it tech and comm, or the expensive combination of all?
4 | posted on 06/24/2020 4:13:48 AM PDT by wita (Always and forever, under oath in defense of Life, Liberty and the pursuit of Happiness.)
To: SauronOfMordor
Such an approach makes EMCON impossible. The adversary would just home in on the emissions, much like some missiles have a mode called "home on jam" as a counter to ECM.
5 | posted on 06/24/2020 4:45:24 AM PDT by Fred Hayek (The Democratic Party is now the operational arm of the CPUSA)
To: Veggie Todd
(from the article): "Mosaic combines the attributes of highly capable, high-end systems with the volume and agility afforded by smaller, less costly, and more numerous force elements, which can be rearranged into many different configurations or presentations. When composed together into a mosaic force, these smaller elements complete operational observe-orient-decide-act cycles (John Boyd's OODA loops) and kill chains."
As noted in the article, communication is the mosaic system's vulnerability. Limit the ability of these cheaper, smaller, more mobile units to communicate among themselves, and you render the opponent defenseless. Other national opponents have observed our participation in various war games and have determined that interrupting communications renders these smaller units useless.

Amazon Honeycode Coding Platform for Non-Coders is the Future of Work
Amazon’s new cloud service Honeycode is a table-driven programming environment, complete with wizards and help boxes, designed to ease you into programming. If you are comfortable with complex Excel spreadsheets and formulas, the logical leap to Honeycode is straightforward. We created an account and started making picklists that linked two tables. Our goal wasn’t to develop a finished app; we wanted to see whether the interface was intuitive.
It is pretty straightforward. Democratized coding, or citizen programming, is a defining feature of the Future of Work.
Learning this and other similar solutions from Google, Microsoft or others is a great way to ensure you are a valuable employee in an ever-changing world.
You can build highly interactive web and mobile applications backed by a powerful AWS-built database to perform tasks like tracking data over time and notifying users of changes, routing approvals, and facilitating interactive business processes. Using Amazon Honeycode, customers can create applications that range in complexity from a task-tracking application for a small team to a project management system that manages a complex workflow for multiple teams or departments. Customers can get started creating applications in minutes, build applications with up to 20 users for free, and pay only for the users and storage of larger applications.
Amazon explains the rationale for developing Honeycode:
Customers try to solve for the static nature of spreadsheets by emailing them back and forth, but all of the emailing just compounds the inefficiency because email is slow, doesn’t scale, and introduces versioning and data syncing errors. As a result, people often prefer having custom applications built, but the demand for custom programming often outstrips developer capacity, creating a situation where teams either need to wait for developers to free up or have to hire expensive consultants to build applications. What usually happens instead is that these applications just never get built. The chasm between using spreadsheets and building custom applications creates a situation where customers often experience unnecessary inefficiency, waste, and inaction.
They believe customers want the ability to create applications using the simplicity and familiarity of a spreadsheet, but with the data management capability of a database, the collaboration and notifications common in business applications, and a truly seamless web and mobile user experience.
The differentiator here is Honeycode relies on the familiar interface of a spreadsheet, but under the hood, offers the power of an AWS-developed database, so customers can easily sort, filter, and link data together to create data-driven, interactive applications. Users can easily create dynamic views and dashboards that are updated in real-time as the underlying data changes – something that is hard to do even with powerful relational databases. Applications built using Amazon Honeycode leverage the full power and scale of AWS, and can easily scale up to 100,000 rows in each workbook, without users having to worry about building, managing, and maintaining the underlying hardware and software. Amazon Honeycode does all of this under the covers by automating the process of building and linking the three tiers of functionality found in most business applications (database, business logic, and user interface), and then deploying fully interactive web and mobile applications to end-users so customers can focus on creating great applications without having to worry about writing code or scaling infrastructure.

Customers can get started by selecting a pre-built template, where the data model, business logic, and applications are pre-defined and ready-to-use (e.g. PO approvals, time-off reporting, inventory management, etc.). Or, they can import data into a blank workbook, use the familiar spreadsheet interface to define the data model, and design the application screens with objects like lists, buttons, and input fields. Builders can also add automations to their applications to drive notifications, reminders, approvals, and other actions based on conditions. Once the application is built, customers simply click a button to share it with team members. With Amazon Honeycode, customers can quickly and easily build multi-user, scalable, and collaborative web and mobile applications that allow them to act on the data that would otherwise be locked away in static spreadsheets.
“Customers have told us that the need for custom applications far outstrips the capacity of developers to create them,” said Larry Augustin, Vice President, Amazon Web Services, Inc. “Now with Amazon Honeycode, almost anyone can create powerful custom mobile and web applications without the need to write code.”
Amazon Honeycode is available today in US West (Oregon) with more regions coming soon.
“We’re excited about the opportunity that Amazon Honeycode creates for teams to build apps to drive and adapt to today’s ever-changing business landscape,” said Brad Armstrong, VP of Business and Corporate Development, Slack. “We see Amazon Honeycode as a great complement and extension to Slack and are excited about the opportunity to work together to create ways for our joint customers to work more efficiently and to do more with their data than ever before.”
SmugMug is a paid image sharing, image hosting service, and online video platform on which users can upload photos and videos. “We are excited to see the opportunity that Amazon Honeycode creates for our teams to build applications that help them respond to changing business conditions,” said Don MacAskill, CEO & Chief Geek, SmugMug & Flickr. “Based upon how easy it is to create new applications, it should really help our teams, and we can see it really taking off.”
The service is free for up to 20 users and as many as 2,500 rows of data in a spreadsheet that’s part of the product. AWS will charge based on storage and number of users.
Having a “citizen coding” platform for your cloud service is a must; otherwise you potentially miss out on the revenue associated with growing applications. This move is similar to a web registrar offering a graphical web development interface. The coding environment makes your primary moneymaker far stickier.
We applaud Amazon on their efforts and think Honeycode is a great Future of Work solution.
Prediction modelling studies for medical usage rates in mass gatherings: A systematic review
Open Access | Peer-reviewed | Research Article
- Hans Van Remoortel,
- Hans Scheers,
- Emmy De Buck,
- Winne Haenen,
- Philippe Vandekerckhove
- Published: June 23, 2020
- https://doi.org/10.1371/journal.pone.0234977
Abstract
Background
Mass gathering manifestations attended by large crowds are an increasingly common feature of society. In parallel, an increased number of studies have been conducted that developed and/or validated a model to predict medical usage rates at these manifestations.
Aims
To conduct a systematic review to screen, analyse and critically appraise those studies that developed or validated a multivariable statistical model to predict medical usage rates at mass gatherings. To identify those biomedical, psychosocial and environmental predictors that are associated with increased medical usage rates and to summarise the predictive performance of the models.
Method
We searched for relevant prediction modelling studies in six databases. The predictors from multivariable regression models were listed for each medical usage rate outcome (i.e. patient presentation rate (PPR), transfer to hospital rate (TTHR) and the incidence of new injuries). The GRADE methodology (Grades of Recommendation, Assessment, Development and Evaluation) was used to assess the certainty of evidence.
Results
We identified 7,036 references and finally included 16 prediction models which were developed (n = 13) or validated (n = 3) in the USA (n = 8), Australia (n = 4), Japan (n = 1), Singapore (n = 1), South Africa (n = 1) and The Netherlands (n = 1), with a combined audience of >48 million people in >1700 mass gatherings. Variables to predict medical usage rates were biomedical (i.e. age, gender, level of competition, training characteristics and type of injury) and environmental predictors (i.e. crowd size, accommodation, weather, free water availability, time of the manifestation and type of the manifestation) (low-certainty evidence). Evidence from 3 studies indicated that using Arbon’s or Zeitz’ model in other contexts significantly over- or underestimated medical usage rates (from 22% overestimation to 81% underestimation).
Conclusions
This systematic review identified multivariable models with biomedical and environmental predictors for medical usage rates at mass gatherings. Since the overall certainty of the evidence is low and the predictive performance is generally poor, proper development and validation of a context-specific model is recommended.
Citation: Van Remoortel H, Scheers H, De Buck E, Haenen W, Vandekerckhove P (2020) Prediction modelling studies for medical usage rates in mass gatherings: A systematic review. PLoS ONE 15(6): e0234977. https://doi.org/10.1371/journal.pone.0234977
Editor: Tim Mathes, Universität Witten/Herdecke, GERMANY
Received: October 8, 2019; Accepted: June 5, 2020; Published: June 23, 2020
Copyright: © 2020 Van Remoortel et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files.
Funding: This work was made possible through funding from the Foundation for Scientific Research of the Belgian Red Cross. One of the activities of the Belgian Red Cross is providing first aid training to laypeople.
Competing interests: The authors have declared that no competing interests exist.
Introduction
A mass gathering has been defined by the World Health Organization (WHO) as an occasion, either organized or spontaneous, where the number of people attending is sufficient to strain the planning and response resources of the community, city, or nation hosting the manifestation [1].
Since mass gatherings attended by large crowds have become a more frequent feature of society, mass gathering medicine was highlighted as a new discipline at the World Health Assembly of Ministers of Health in Geneva in May 2014 [2]. As a consequence, the number of international initiatives and meetings on mass gathering medicine has increased over the past decade, as has the number of experts and publications on pre-event planning and surveillance for mass gatherings. Mass gatherings are associated with increased health risks and hazards such as the transmission of communicable diseases, exacerbation of non-communicable diseases and comorbidities (e.g. diabetes, hypertension, COPD, cardiovascular events) and an impact on mental or physical health and psychosocial disorders [3]. Furthermore, the mental health consequences of traumatic incidents at mass gatherings can be prolonged, with stress to people, families, and communities resulting in short-term fear of death as well as general distress, anxiety, excessive alcohol consumption, and other psychiatric disorders. If mass gatherings are improperly managed, this can lead to human, material, economic or environmental losses and impacts [4]. Therefore, the development of (cost-)effective methods for the planning and handling of the health risks associated with mass gatherings will strengthen global health security, prevent excessive emergency health problems and associated economic loss, and mitigate potential societal disruption in host and home communities [5].
To have a better understanding of the health effects of mass gatherings, a conceptual model for mass gathering health care was published in 2004 by Paul Arbon [6]. This model divided the key characteristics of mass gathering manifestations into three interrelated domains that may have an impact on the Patient Presentation Rate (PPR), the Transport To Hospital Rate (TTHR) and the level and extent of healthcare services: 1) the biomedical domain (i.e. biomedical influences such as demographic characteristics of the audience), 2) the psychosocial domain (i.e. psychological and social influences within mass gatherings including individual and crowd behaviour) and 3) the environmental domain (i.e. environmental features of a mass gathering including terrain and climatological conditions). Although most scientific papers on mass gathering are descriptive, i.e. without proper statistical analysis to predict medical usage rates, recently more prediction modelling studies have been developed and/or validated to have a better understanding of the patient care required at such manifestations. In order to formulate evidence-based, robust and effective interventions in the planning and management of mass gatherings, the scientific underpinning of Arbon’s conceptual model by a systematic screening, analysis and critical appraisal of prediction modelling studies for medical usage rates at mass gatherings was needed.
This systematic review aimed to identify multivariable prediction models for medical usage rates at mass gatherings, to summarize evidence for individual biomedical, psychosocial and environmental predictors at mass gatherings, and to summarise the predictive performance of these models.
Material and methods
Protocol and registration
We carried out a systematic literature review according to a predefined protocol, which was not registered beforehand [7]. We planned and reported the systematic review in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA checklist, S1 File) [8].
Eligibility criteria
Studies were eligible for inclusion if they answered the following PICO (Population, Intervention, Comparison, Outcome) question: “Which predictive models (I) are available for emergency services planning (O) during mass gathering manifestations (P)?” Full texts of potentially relevant articles were reviewed according to the following inclusion and exclusion criteria:
- Population: studies performed on all types of mass gatherings were included, such as sport (spectator) manifestations, (indoor/outdoor) music concerts and/or festivals. A mass gathering has been defined by the World Health Organization (WHO) as an occasion, either organized or spontaneous, where the number of people attending is sufficient to strain the planning and response resources of the community, city, or nation hosting the manifestation [1].
- Intervention/Predictors: we included studies that described a multivariable statistical model and extracted data of the predictors. Multivariable models represent a more realistic picture, rather than looking at a single variable (univariate associations) and they provide a powerful test of significance compared to univariate techniques. We included studies that had the intention to evaluate more than one predictor variable in a multivariable model, regardless of how many predictor variables remained in the final model. Evacuation models, opinion-based or theory-based models, and statistical models based on univariate (correlation) analysis were excluded.
- Outcome: we included medical usage rates such as Patient Presentation Rate (PPR), the Transport To Hospital Rate (TTHR) or the incidence of new injuries.
- Study design: prediction model development studies without external validation, prediction model development studies with external application in few independent mass gatherings or validation on an extensive list of independent mass gatherings (i.e. big data analysis), external model validation studies or studies that applied observations from few mass gatherings to another prediction model, were included according to the Checklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) [9].
- Language: no language restrictions were applied.
Data sources and searches
Eligible studies were identified by searching the following databases: MEDLINE (via the PubMed interface), Embase (via Embase.com), the Cochrane Library, CINAHL, Web of Science and Scopus from the time of inception of the database until 14 May 2019. We developed search strategies for each database using index terms and free text terms (S2 File). Search yields were exported to a citation program (EndNote X7.5) and duplicates were discarded.
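The de-duplication step described above was done in EndNote. Purely as an illustration of that step (not the authors' actual workflow), a minimal programmatic equivalent could normalise titles and keep the first occurrence of each record:

```python
# Illustrative only: naive de-duplication of search records by normalised title.
# The review itself used EndNote X7.5; titles and fields here are made up.
import re


def normalise(title: str) -> str:
    """Lower-case a title and strip punctuation/whitespace for comparison."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()


def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record for each normalised title."""
    seen: set[str] = set()
    unique = []
    for record in records:
        key = normalise(record["title"])
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique


records = [
    {"title": "Mass gathering medicine: a predictive model...", "source": "MEDLINE"},
    {"title": "Mass Gathering Medicine: A Predictive Model…", "source": "Embase"},
]
print(len(deduplicate(records)))  # 1
```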
Study selection
Two reviewers (HVR and HS) independently screened the titles and abstracts of all references yielded by the search. Subsequently, the full text of each article that potentially met the eligibility criteria was obtained, and after a full-text assessment, studies that did not meet the selection criteria were excluded. Any discrepancies between reviewers were resolved by consensus or by consulting a third reviewer (EDB). For each included study, the reference lists and first 20 related citations in PubMed were screened for additional relevant records.
Data extraction
Data concerning study design (type of prediction modelling study), study aims and hypothesis, population characteristics (participation eligibility and recruitment method; participation description; details of mass gathering manifestations; study dates), candidate predictors (dichotomous/categorical/continuous variables), outcome measures (medical usage rates), effect sizes, statistical model, and study quality were extracted independently by the two reviewers.
Risk of bias assessment
The PROBAST (Prediction model Risk Of Bias ASsessment Tool) checklist items were used to assess the risk of bias and concerns for applicability for each study [10]. These items include 20 signalling questions across 4 domains: participants, predictors, outcome and analysis. Signalling questions were answered as ‘yes’, ‘probably yes’, ‘no’, ‘probably no’ or ‘no information’ and risk of bias was assessed for each domain. A domain where all signalling questions were answered as (probably) yes was judged as ‘low risk of bias’. An answer of (probably) no on 1 or more questions indicated the potential for bias, whereas no information indicated insufficient information. The risk of bias assessment was performed by the two reviewers independently.
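The domain-level judgement described above amounts to a simple decision rule. A minimal sketch of that rule, using made-up answers rather than the review's actual assessments, is shown below:

```python
# Sketch of the PROBAST domain-level judgement rule described above.
# Answers are illustrative; the review's real assessments are in S1 and S2 Figs.
YES = {"yes", "probably yes"}
NO = {"no", "probably no"}


def judge_domain(answers: list[str]) -> str:
    """Classify one PROBAST domain from its signalling-question answers."""
    if all(a in YES for a in answers):
        return "low risk of bias"
    if any(a in NO for a in answers):
        return "potential for bias"      # at least one (probably) no
    return "insufficient information"    # remaining answers are 'no information'


print(judge_domain(["yes", "probably yes", "yes"]))    # low risk of bias
print(judge_domain(["yes", "no information", "yes"]))  # insufficient information
print(judge_domain(["probably no", "yes", "yes"]))     # potential for bias
```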
Data synthesis
Individual predictors for medical usage rates.
The predictors (both the statistically significant (p<0.05) and statistically non-significant ones) from multivariable statistical models were pooled into different categories for each type of mass gathering manifestation (music concert, spectator sport manifestation, sport manifestation, mixed manifestation (sport, music, public exhibition)) corresponding to the three main domains for mass gathering health according to Arbon’s conceptual model: biomedical domain, psychosocial domain, environmental domain [6]. The direction of the association between the candidate predictors and the outcome variables was expressed as positive (e.g. night manifestations are associated with higher patient presentation rates compared to day manifestations) or negative (e.g. free water availability is associated with lower patient presentation rates).
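As a concrete illustration of this pooling step, each extracted predictor can be recorded with its Arbon domain, outcome and direction of association; the entries below are examples drawn from the text, not the review's full extraction.

```python
# Illustrative structure for pooling extracted predictors by Arbon domain.
# Entries are examples from the text, not the complete extraction table.
predictors = [
    {"name": "crowd size", "domain": "environmental", "outcome": "patient presentations", "direction": "positive"},
    {"name": "free water availability", "domain": "environmental", "outcome": "PPR", "direction": "negative"},
    {"name": "night manifestation", "domain": "environmental", "outcome": "PPR", "direction": "positive"},
    {"name": "age", "domain": "biomedical", "outcome": "medical complications", "direction": "positive"},
]

# Group predictor names by domain, mirroring the summary figures in the Results.
by_domain: dict[str, list[str]] = {}
for p in predictors:
    by_domain.setdefault(p["domain"], []).append(p["name"])
print(by_domain)
```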
Predictive performance of the models.
Predictive accuracy measures of the models, such as the R2 or the mean/median error, were extracted and summarized. The R2 is the square of the correlation and measures the proportion of variation in the dependent variable (i.e. medical usage rates) that can be attributed to the independent variable (i.e. predictor variables). The R2 indicates how well the regression model fits the observed data, ranging from 0% (no fit) to 100% (perfect fit). The predictive performance is considered as very weak (R2 of 0–4%), weak (R2 of 4 to 16%), moderate (R2 of 16 to 36%), strong (R2 of 36 to 64%) or very strong (R2 of 64% to 100%) [11].
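Expressed as code, the Evans [11] bands map an R2 value onto a descriptive label. A minimal sketch follows; the handling of values falling exactly on a cut-off is our own convention, since the quoted bands overlap at the boundaries.

```python
def classify_r_squared(r2: float) -> str:
    """Label predictive performance using the Evans (1996) bands cited in this review.

    Boundary values (e.g. exactly 0.04) are assigned to the higher band here,
    which is our own convention; the review quotes the bands as 0-4%, 4-16%,
    16-36%, 36-64% and 64-100%.
    """
    if not 0.0 <= r2 <= 1.0:
        raise ValueError("R-squared must lie between 0 and 1")
    if r2 < 0.04:
        return "very weak"
    if r2 < 0.16:
        return "weak"
    if r2 < 0.36:
        return "moderate"
    if r2 < 0.64:
        return "strong"
    return "very strong"


# Illustrative values; the review reports model R2 values between 0.04 and 0.66.
for r2 in (0.03, 0.10, 0.34, 0.50, 0.66):
    print(r2, classify_r_squared(r2))
```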
Information on type of mass gathering, outcomes measured and the model that was validated with the data collected (i.e. the reference model), was summarized. Results were reported as a % underestimation or % overestimation (compared to the reference model).
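One plausible way to compute such a percentage is relative to the observed count; since the exact convention of each validation study is not restated here, treat the sketch below as an assumption.

```python
def prediction_error_pct(predicted: float, observed: float) -> str:
    """Express a prediction as % over- or underestimation of the observed count.

    Assumes the percentage is taken relative to the observed value; individual
    validation studies may define it slightly differently.
    """
    if observed <= 0:
        raise ValueError("observed count must be positive")
    diff_pct = 100.0 * (predicted - observed) / observed
    label = "overestimation" if diff_pct >= 0 else "underestimation"
    return f"{abs(diff_pct):.0f}% {label}"


# Hypothetical counts, purely for illustration.
print(prediction_error_pct(30, 100))   # 70% underestimation
print(prediction_error_pct(122, 100))  # 22% overestimation
```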
Grading of the evidence
The GRADE approach (Grading of Recommendations, Assessment, Development and Evaluation) was used to assess the certainty of the evidence (also known as quality of evidence or confidence in effect estimates) [12]. Since no meta-analyses were possible, we used the GRADE guidelines for rating the certainty in evidence in the absence of a single estimate of effect [13]. The certainty of the evidence was graded as ‘high’ (further research is very unlikely to change our confidence in the effect estimate), ‘moderate’ (further research is likely to have an important impact on our confidence in the effect estimate), ‘low’ (further research is very likely to have an important impact on our confidence in the effect estimate and is likely to change the estimate) or ‘very low’ (any estimate of effect is very uncertain). The initial certainty level of the included prediction modelling studies was set at ‘high’ because the association between the predictors and outcomes was considered irrespective of any causal connection. Eight criteria were considered to further downgrade or upgrade the certainty of the evidence: five criteria that might potentially downgrade the overall certainty of the evidence (i.e. methodological limitations of the study, indirectness, imprecision, inconsistency and likelihood of publication bias) and three criteria that might potentially upgrade the overall certainty of the evidence (i.e. large effect, dose-response relation in the effect, and opposing plausible residual bias or confounding). Methodological limitations of the studies were assessed by considering the overall risk of bias judgement across studies based on the risk of bias assessment of the 4 PROBAST domains (i.e. participants, predictors, outcome and analysis). Indirectness was assessed by making a global judgement on how dissimilar the research evidence is to the PICO question at hand (in terms of population, interventions and outcomes across studies). The PROBAST tool was used to identify concerns regarding the applicability of each included study (i.e. when the populations, predictors or outcomes of the study differ from those specified in the review question) and an overall judgement across studies was made. Imprecision was assessed by considering the optimal information size (or the total number of events for binary outcomes and the number of participants in continuous outcomes) across all studies. A threshold of 400 or less is concerning for imprecision [14]. Results may also be imprecise when the 95% confidence intervals of all studies or of the largest studies include both no effect and clinically meaningful benefits or harms. A global judgement on inconsistency was made by evaluating the consistency of the direction and primarily the difference in the magnitude of association between the predictor variables and the outcomes across studies (since statistical measures of heterogeneity were not available). Widely differing estimates of the effects indicated inconsistency. Publication bias was suspected when the body of evidence consisted of only small positive studies or when studies were reported in trial registries but not published.
A large magnitude of effect (i.e. a large association between the predictor variable and outcome) was considered present when the relative risk or odds ratio was 2–5 or 0.5–0.2, with no plausible confounders, in the majority of studies. Since this review was not focused on drugs or pharmaceutical agents, assessing a dose-response gradient was not applicable here. Finally, we only included studies that described a multivariable statistical model. Therefore, judging whether plausible confounders and biases left unaccounted for in the adjusted/multivariable analyses (i.e. residual confounding) might lead to an underestimated association was also not applicable here.
The two reviewers independently rated the certainty of the evidence for each outcome. Any discrepancies between reviewers were resolved by consensus or by consulting a third reviewer (EDB).
Results
Study selection
The systematic literature search resulted in a total of 7,036 citations (after removing duplicates), which were screened by two reviewers independently. Fig 1 represents the study selection flowchart. We included 16 studies that developed (n = 13) or externally applied (n = 3) a multivariable statistical model to predict medical usage rates in mass gathering manifestations. No studies were identified that externally validated prediction models against a big data set of mass gatherings.
Study characteristics
A total of >1,700 mass gathering manifestations (median[range]: 2.5[1–405] mass gatherings per study) attended by >48 million people were included to develop and/or validate these models. A mix of different types of mass gathering manifestations was included, such as sports (spectator) manifestations (e.g. soccer games, auto races, (half-)marathon, n = 12 (75%)), music concerts (indoor/outdoor, n = 8 (50%)), fete/carnivals (n = 4, 25%), public exhibitions and ceremonial manifestations (n = 3, 19%). The majority of the studies (n = 12, 75%) were conducted in the USA (n = 8) and Australia (n = 4). The other studies were performed in Japan (n = 1), Singapore (n = 1), South Africa (n = 1) and The Netherlands (n = 1). Data were collected in 2 studies between 1980–1995, in 7 studies between 1995–2005, and in 7 studies between 2005–2015. Patient influx at first aid posts, expressed as total number or rate (per 1,000 or 10,000 attendees), was the outcome of interest in most of the studies (n = 14). Other outcomes included in the prediction model were the number of transfers to hospital (per 1,000 or 10,000 attendees) (n = 7) or the incidence of new (non-)medical injuries/complications (n = 3). All studies (except one) investigated whether at least one of the following environmental candidate predictors was associated with medical usage (rates): 1) weather conditions (n = 12: average/maximal daily temperature; humidity; heat index; dew point; % sunshine; wind speed; precipitation; barometric pressure), 2) crowd size (n = 12), 3) type of the manifestation (n = 12), 4) time of the manifestation (n = 7: night vs day; duration; year of the manifestation; season; day of the week), 5) venue accommodation (n = 7: mobile vs seated; indoor vs outdoor; bounded vs unbounded; focussed vs extended; maximum venue capacity; access to venue), 6) presence of alcohol (n = 4) or 7) free water availability (n = 1). Five studies included biomedical candidate predictors in their (univariate) model: 1) demographics (n = 5: age; gender; BMI), 2) level of competition (n = 3: running experience; running pace category; competitive vs non-competitive), 3) training characteristics (amount of training; type of training) and 4) injury status (n = 1: injuries incurred in the 12 months prior to the manifestation). None of the studies included psychosocial candidate predictors (e.g. crowd behaviour, reason for attendance, length of stay) in the model. Four studies used general linear regression analysis to develop a multivariable prediction model. Other types of generalized linear regression analysis included Poisson regression analysis (n = 4), logistic regression analysis (n = 3), and negative binomial regression analysis (n = 2). One study applied non-linear regression analysis (Classification And Regression Trees (CART)). Details on the characteristics of the included studies can be found in Table 1.
Risk of bias assessment
Individual judgements about each PROBAST risk of bias item (i.e. 20 signalling questions according to 4 domains) can be found in S1 and S2 Figs. PROBAST domains that were most prone to bias were the methods of analysis used (high risk of bias in 13 studies (81%)) and the participant recruitment (high risk of bias in 10 studies (62%)).
Factors that predict patient presentation (rate)
Ten multivariable regression models to predict patient presentation (rate) were developed. The following predictor variables were included in these models: weather conditions (in 8 models), crowd size (in 4 models), type of the manifestation (in 8 models), venue accommodation (in 4 models), time of the manifestation (in 3 models), free water availability (in 1 model), demographic information (in 1 model), level of competition (in 2 models). Six studies reported the full equation of the multivariable model to predict PPR [15, 19, 22, 24, 26, 27].
Fig 2 summarizes the environmental and biomedical predictors for patient presentation rate derived from multivariable regression models.
Fig 2. Biomedical and environmental variables from multivariable regression analyses predicting the Patient Presentation Rate (PPR).
The thickness of the box represents the number of multivariable models including the following predictors: level of competition (n = 2); demographics (n = 1, age, gender); crowd size (n = 4); accommodation (n = 4, mobile vs seated, bounded vs unbounded, outdoor vs indoor, type of venue access, maximum venue capacity); weather conditions (n = 8, humidity, temperature, dew point, presence of air conditioning, % sunshine, wind speed, precipitation); free water availability (n = 1); time of the event (n = 3; day vs night, day of the week); type of the event (n = 8, music events, sport events).
Weather conditions were found to be a significant predictor of patient presentation (rate): humidity [15, 26], temperature [21], heat index (i.e. a combination of air temperature and relative humidity) [18, 24] and dew point [27] were positively associated with the number or rate of patient presentations at first aid posts. In one of our included studies, temperature (i.e. <23.5°C vs ≥23.5°C and <25.5°C vs ≥25.5°C) was included in a non-linear regression tree model, with a lower total number of patient presentations in case of higher temperatures [26]. One study conducted in the USA found that the presence of air conditioning in a mixture of indoor mass gatherings (sport spectator manifestations, concerts, public exhibitions) was linked to a lower patient presentation rate [17]. In five studies, the following climatological parameters were not statistically significantly associated with patient presentation (rate): humidity, % sunshine and wind speed in 1 study [27], temperature in 2 studies [16, 27] and precipitation in 3 studies [18, 21, 24].
The type of the mass gathering manifestations was a significant predictor in 7 multivariable regression models. An Australian study of 201 mixed manifestations found that non-sporting manifestations resulted in a higher PPR. Three studies focused on sports (spectator) manifestations and demonstrated that football games, but also specific outdoor music manifestations (i.e. rock concerts) resulted in higher PPR compared to baseball games. One USA study investigating 6 automobile race weekends (NASCAR, Kansas Speedway, USA) found a higher PPR during race days versus practice days. In one study that predicted PPR for a mixture of 79 mass gatherings, the type of manifestation (athletic manifestations versus football; concerts versus football; public exhibitions versus football) was not a significant predictor [17].
Three studies, conducted on data from a mixture of mass gathering manifestations, found crowd size to be positively associated with the total number of patient presentations [15, 26, 31] whereas attendance was not associated with PPR in one study [16].
Manifestations at which the audience was predominantly seated (i.e. typically large stadium concerts) demonstrated a significantly lower presentation rate compared to manifestations where spectators tended to be more mobile [15]. Outdoor manifestations had statistically significantly more medical presentations than indoor manifestations [15, 17]. A multivariable prediction model development study analysing all 32 soccer games played in Japan at the 2002 FIFA World Cup concluded that higher venue capacity and easier venue access were linked to a lower PPR. Conflicting evidence was found for bounded (i.e. a manifestation contained within a boundary, often fenced) versus unbounded manifestations: one Australian model (based on data of 201 mixed manifestations) showed that bounded manifestations had a higher PPR [15], whereas one USA model (based on data of 79 mixed manifestations) showed that unbounded manifestations resulted in a higher PPR [17]. One multivariate model to predict PPR at 405 music concerts in the USA showed that indoor versus outdoor manifestations was not a statistically significant predictor [16].
Two Australian studies found that manifestations running during both day and night resulted in a higher PPR than manifestations held during the day or night only [15, 26]. One model to predict PPR at 403 music concerts found that day of the week was not a statistically significant predictor [24].
Free water availability (i.e. provided without cost to the patron) resulted in a lower PPR. In this USA model, it was shown that the absence of free water led to a two-fold increase in the PPR, even after controlling for other predictors such as weather conditions, percentage seating and alcohol availability [17].
One study found that competitiveness was positively associated with PPR during half-marathon running events, whereas the level of competition, expressed as the combination of the number of caution periods, number of lead changes, and the interval between the winner and second place, was not associated with PPR during auto race events.
Detailed information about the effect sizes of these multivariable predictors can be found in S1 Table.
Factors that predict transfer to hospital (rate)
Four multivariable regression models to predict transfer to hospital (rate) were developed. The following predictor variables were included in these models: weather conditions (in 4 models), crowd size (in 1 model), venue accommodation (in 2 models), time of the manifestation (in 1 model), type of the manifestation (in 4 models), number of patient presentations (in 1 model), and type of the injury (in 1 model). Two studies reported the full equation of the multivariable model to predict TTHR [15, 26].
The biomedical and environmental multivariable factors predicting the transfer to hospital rate are depicted in Fig 3.
Fig 3. Biomedical and environmental variables from multivariable regression analyses predicting The Transfer to Hospital Rate (TTHR).
The thickness of the box represents the number of multivariable models including the following predictors: type of injury (n = 1, intoxication vs medical vs trauma); number of patients (n = 1); accommodation (n = 2, mobile vs seated, bounded vs unbounded); weather conditions (n = 4, humidity, temperature, precipitation); time of the event (n = 1, day vs night); type of the event (n = 4, music genres, sport events); crowd size (n = 1).
Humidity, temperature or the heat index (≥32.2°C) were positively associated with the TTHR [15, 24, 26]. In one multivariable regression model developed with data from auto race events, mean temperature, precipitation and type of the manifestation (practice day vs race day) were not predictive of TTHR [21]. Music genres with a significant positive association with transport rates were alternative rock and country, whereas no association was found for other music genres or for music festivals versus non-festival manifestations [24]. Manifestation type was an important predictor in the non-linear regression tree model of Arbon et al., since this predictor determined 2 decision nodes [26].
Venue accommodation was a significant predictor for transportation rates in 2 studies: more transports were predicted in case the audience was seated or bounded (compared to mobile or unbounded) [15]. A seated vs mobile audience was also included in the recent Arbon regression tree model [26]. Crowd size and number of patients evaluated were positively associated with TTHR [15, 26]. Similar to the prediction of PPR, manifestations organized during both day and night (compared to day or night only) were predictive for TTHR [26]. Transport rates were highest with alcohol/drug intoxicated patients (p<0.001) and lowest with traumatic injuries (p = 0.004). Detailed information about the effect sizes of these predictors derived from multivariable models can be found in S2 Table.
Factors that predict the incidence of new sport injuries
Three multivariable regression models to predict the incidence of new sport injuries were developed: 2 models with data of running manifestations [20, 23] and 1 model with data of a mixture of sporting manifestations, fetes/carnivals, spectator sport manifestations, concerts/raves, and ceremonial manifestations [25]. None of the included studies reported the full equation of the multivariable model to predict the incidence of new sport injuries. The following predictor variables were included in these models: demographic information (in 3 models), type of the manifestation (in 1 model), time of the manifestation (in 2 models), level of competition (in 2 models) and training characteristics (in 1 model). Fig 4 shows the environmental and biomedical predictors for the incidence of new sport injuries derived from multivariable regression models.
Fig 4. Biomedical and environmental variables from multivariable regression analyses predicting new injuries.
The thickness of the box represents the number of multivariable models including the following predictors: training characteristics (n = 1, type of training, training frequency, type of terrain); level of competition (n = 2, running pace, running experience); demographics (n = 3, age, gender, BMI); time of the event (n = 2, season); type of the event (n = 1, sport events, carnival/fetes, music concerts).
The following environmental factors remained statistically significant in a multivariable model to predict new injuries: sporting manifestations and colder environmental conditions (expressed by the year of the manifestation or by season (i.e. winter versus spring)) resulted in a higher incidence of new injuries. Other types of manifestations (i.e. carnival, fete or rave concerts) or other seasons (i.e. summer vs spring; autumn versus spring) were not associated with new injuries [25].
Significant biomedical factors to predict new injuries included demographics (age and gender), level of competition and training characteristics. Older female runners during the 2 Oceans half-marathon in Cape Town (South Africa) had a higher incidence of medical complications (i.e. general or postural hypotension) than male runners and younger female runners [20]. However, in a model with data from a mixture of sporting and non-sporting manifestations, the incidence of injuries was significantly higher in men [25]. During a (half-)marathon running race, a lower level of competition (expressed by a slower running pace (>7 minutes per km) or <5 years of running experience) and the frequency of interval training (i.e. sometimes versus always) were predictive of the incidence of new injuries. Two models to predict injuries during half marathon races found that specific information regarding demographics (gender, BMI), level of competition (running experience, running pace category) or training characteristics (training frequency, type of terrain) did not contribute to the prediction of injury incidence. Detailed information about the effect sizes of these multivariable predictors can be found in S3 Table.
Predictive performance of the models
Four studies reported the R2 of their model to predict PPR or TTHR. The predictive performance of the PPR models ranged from very low (R2 of 0.04 [16]) to (very) strong (R2 values of 0.64 [15] and 0.66 [19]). The predictive accuracy of the linear TTHR model by Arbon et al was moderate (R2 = 0.34) [26]. The non-linear models of Arbon et al. accurately predicted PP and TTH, as indicated by the low median error of 16 presentations per event and 1 transportation per event, respectively.
Three studies externally applied prediction models for mass gatherings by comparing the actual number of patient presentations or transports at 3 outdoor electronic dance music manifestations in the USA [28], a US spectator sport manifestations (i.e. automobile race, Baltimore Grand Prix) [29] and a city festival (i.e. Royal Air Show, Adelaide, Australia) [30] with the predicted number by the model developed by Arbon et al. [15], by Hartman et al. [32] and/or the retrospective (historical) analysis undertaken by Zeitz et al. [33]. The following predictor variables were included: weather conditions (in 3 models), crowd size (in 3 models), type of the manifestation (in 3 models), time of the manifestation (in 3 models), venue accommodation (in 3 models), presence of alcohol (in 1 model), demographic information (in 1 model).
The actual number of patient presentations and transfers to hospital in two US studies at urban auto-racing events and outdoor electronic dance music manifestations were underestimated by the Arbon/Hartman model (67–81% underestimation). The Arbon model and the Zeitz review overpredicted the actual number of casualties during the Royal Air Show in Adelaide (Australia) (22% and 10%, respectively). In this study, the actual number of daily ambulance transfers was underpredicted by 43% (Arbon model) and 53% (Zeitz review).
GRADE assessment
Although all included studies were observational, the initial certainty level was set at ‘high’ because the association between predictors and outcomes was irrespective of any causal connection. The overall certainty level (for all outcomes: PPR, TTHR, injury status) was downgraded with one level (from ‘high’ to ‘moderate’) due to risk of bias since overall risk of bias was considered as ‘high’ in all studies (S3 and S4 Figs). Overall concerns for applicability were present in 12 studies (75%), mainly because of the limited generalizability of the study participants (concerns for applicability in 9 studies (56%)) and the outcomes assessed (concerns for applicability in 5 studies (31%)) (S5 and S6 Figs). Therefore, the certainty level was further downgraded with one level due to indirectness (from ‘moderate’ to ‘low’). No reason was present for upgrading or further downgrading the certainty level due to imprecise or inconsistent results or publication bias.
Altogether, the final certainty in the effect estimates for the multivariable models predicting PPR, TTHR or injury status was considered as ‘low’. This implies that our confidence in the effect estimates is limited and that further research is very likely to have an important impact on our confidence and is likely to change the estimate.
Discussion
This systematic review included 16 studies that developed and/or externally applied a multivariable regression model to predict medical usage rates at mass gatherings. We identified a set of biomedical (i.e. age, gender, level of competition, training characteristics and type of injury) and environmental predictors (i.e. crowd size, accommodation, weather, free water availability, time of the manifestation and type of the manifestation) for PPR, TTHR and injury status. No evidence for psychosocial predictors was found. The overall certainty in the effect estimates is low due to risk of bias of the studies and limited generalizability (indirectness). Evidence from the studies that applied observations from few mass gatherings to another prediction model indicated that medical usage rates are consistently over/underestimated. Therefore, the development and validation of context-specific prediction models is recommended.
To the best of our knowledge, this is the first review that systematically screened, analyzed and critically appraised studies that developed and/or validated a statistical model to predict medical usage rates at mass gatherings. Until today, numerous descriptive papers and narrative reviews on this topic have been published. For example, Nieto and Ramos found 96 articles, published between 2000 and 2015 in the Scopus database, on the type of manifestations (main type of manifestations: sports (46%), music (25%) or religious/social content (23%)) and topics covered in the mass gathering literature (main topics: health care, PPRs and/or TTHRs, respiratory pathogens, surveillance and the global spread of diseases) [34]. Moore et al. concluded that the most important predictive factors to influence medical usage rates at large manifestations were the weather, alcohol and drug use and type of manifestation [35]. Baird et al. searched 4 biomedical databases and retained 8 studies suggesting a positive relationship between temperature/humidity and PPR [36]. Our review serves as a quantitative basis to predict medical usage rates at mass gatherings by identifying those variables that were included in multivariable prediction models.
The major strength of our systematic review is the use of a rigorous methodology including sensitive search strategies in six databases, comprehensive selection criteria (no restriction to population (types of mass gatherings) or outcomes (medical usage rates)) resulting in scientific evidence, judged and critically appraised by two reviewers independently. We restricted our selection of included studies to multivariable regression models and excluded studies that only used univariate regression analyses. Advantages of multivariable analysis include the ability to represent a more realistic picture than looking at a single variable. Indeed, apparent univariate associations may in reality be explained or confounded by a non-measured predictor variable. The risk of overlooking confounding or real predictor variables decreases by including more potential predictor variables in the model. Further, multivariable techniques can control association between variables by using cross tabulation, partial correlation and multiple regressions, and introduce other variables to determine the links between the independent and dependent variables or to specify the conditions under which the association takes place. This provides a more powerful test of significance compared to univariate techniques [37]. Although some scientists have questioned the concept of statistical significance [38, 39], the statistically significant predictors from these multivariable regression models apply as the best available scientific basis for which predictors are associated with increased medical usage rates.
There are three limitations concerning the critical appraisal of the included studies design, the lack of standardized data collection and analysis, and the limited generalizability of the results. Firstly, we critically appraised the included studies by using the PROBAST checklist items [10] and the GRADE approach for the case where no single estimate of effect is present [13]. Since the GRADE working group has not yet provided specific recommendations on how to rate the certainty of effect estimates of prediction modelling studies, future formal guidance is needed. Secondly, the methodology of data collection (both predictors and medical usage rates) and statistical analysis (i.e. different types of logistic, linear and non-linear regression analysis) varied substantially among the included studies. Hence, we were not able to conduct a meta-analysis. Although there is agreement on some broad concepts underlying mass-gathering health amongst an international group of mass gathering experts [40], more future scientific effort is needed to standardize data collection and statistical analysis when developing and/or validating a prediction model. Thirdly, most of the included prediction models were developed or validated in the USA or Australia. Since the interaction between the different biomedical, environmental, psychosocial factors and medical usage rates is complex, no extrapolation of these models to other contexts (e.g. other countries/continents, other type of manifestations, etc) can be performed. For example, climatological differences (temperature, humidity, precipitation, cloudiness, brightness, visibility, wind, and atmospheric pressure), the mixture of type of manifestations included in the prediction models (i.e. sport (spectator) manifestations, indoor/outdoor music concerts, carnivals, public exhibitions, etc.), but also difference in public health systems across countries (leading to different emergency care services delivery policies) hinders extrapolation. This limited generalizability was also confirmed by the 3 studies that applied observations from few mass gatherings to the prediction models of Arbon or Zeitz, showing significant under/overestimation of the medical usage rates when using an existing prediction model [28–30]. Future development of prediction models should therefore be validated both internally and externally, preferably against big data sets of various types of mass gatherings.
This systematic review scientifically underpinned Arbon’s conceptual model with a list of statistically significant biomedical (i.e. age, gender, level of competition, training characteristics and type of injury) and environmental predictors (i.e. crowd size, accommodation, weather, free water availability, time of the manifestations and type of the manifestations) for PPR, TTHR and injury status. The R2 (i.e. a statistical measure that represents the proportion of variance for medical usage rates that is explained by the biomedical/environmental predictors) of the multivariable regression models ranged from 4% to 66%. This implies that a (large) part (34–96%) of the variation in medical usage rates is as yet unexplained and dependent on unidentified factors. An important potential predictor (which is difficult to measure quantitatively) might be the characteristics of the first aid delivery services such as the amount and size of first aid posts (i.e. more posts will result in increased medical usage rates and smaller posts might result in a higher transfer to hospital rate) and the level of mobility of the first aid providers (i.e. more mobile teams will generate higher medical usage rates).
The current list of predictors is of clinical relevance for first aid or emergency services, experts and researchers involved in mass gatherings. These predictors should be consistently measured in a standardized way to develop and/or validate future prediction models, in order to allow more cost-effective pre-event planning and resource provision. Another remaining question for future research is how PPR and TTHR evolve over the time span of a mass gathering, in order to make the most efficient use of (first aid) material and people (nurses, first aid providers, doctors, etc.). Since planning and preparing public health systems and services for managing a mass gathering is a complex procedure and requires a multidisciplinary approach, interdisciplinary research and international collaboration are of paramount importance to execute this future research agenda successfully [41].
Conclusion
This systematic review identified multivariable models that predict medical usage rates at mass gatherings. Different biomedical (i.e. age, gender, level of competition, training characteristics and type of injury) and environmental (i.e. crowd size, accommodation, weather, free water availability, time and type of the manifestation) predictors were associated with medical usage rates. Since the overall quality of the evidence is considered low and no generic predictive model is available to date, proper development and validation of a context-specific model is recommended. Future international initiatives to standardize the collection and analysis of mass gathering health data are needed to enable meta-analyses, comparison of models across societies and modelling of various scenarios to inform health services. This will ultimately result in more cost-effective pre-hospital care at mass gatherings.
Supporting information
S1 Fig. Review authors’ judgements (for each included study) on the 20 signalling questions of the 4 PROBAST domains (participants–predictors–outcome–analysis).
Low risk of bias (answers ‘yes’ or ‘probably yes’ to signalling questions),
high risk of bias (answers ‘no’ or ‘probably no’ to signalling questions),
unclear (answer ‘no information’ to signalling questions). *Studies that applied observations from few mass gatherings to another prediction model (FitzGibbon 2017, Nable 2014, Zeitz 2005): items not applicable.
https://doi.org/10.1371/journal.pone.0234977.s001
(TIF)
S1 Table. Prediction model development studies for Patient Presentation Rate (PPR): Synthesis of findings of included studies.
r: correlation coefficient; RR: risk ratio; MW U: Mann-Whitney U. £ No raw data/SD’s available (or specify), effect size and CI cannot be calculated; ¥ Imprecision (large variability of results); † Imprecision (lack of data).
https://doi.org/10.1371/journal.pone.0234977.s007
(DOCX)
S2 Table. Prediction model development studies for Transfer To Hospital Rate (TTHR): Synthesis of findings of included studies.
£ No raw data/SD’s available (or specify), effect size and CI cannot be calculated; † Imprecision (lack of data).
https://doi.org/10.1371/journal.pone.0234977.s008
(DOCX)
S3 Table. Prediction model development studies for medical complications and injuries: Synthesis of findings of included studies.
OR: odds ratio; CI: Confidence Interval; BMI: Body Mass Index; £ No raw data/SD’s available (or specify), effect size and CI cannot be calculated; ¥ Imprecision (large variability of results); † Imprecision (lack of data).
https://doi.org/10.1371/journal.pone.0234977.s009
(DOCX)
Acknowledgments
Evi Verbecque is acknowledged for her help in developing the search strategies.
References
1. World Health Organization (WHO). What is WHO’s role in mass gatherings? 2016. https://www.who.int/features/qa/mass-gatherings/en/.
2. Memish ZA, Zumla A, McCloskey B, Heymann D, Al Rabeeah AA, Barbeschi M, et al. Mass gatherings medicine: international cooperation and progress. Lancet. 2014;383(9934): 2030–2. pmid: 24857704.
3. Memish ZA, Steffen R, White P, Dar O, Azhar EI, Sharma A, et al. Mass gatherings medicine: public health issues arising from mass gathering religious and sporting events. Lancet. 2019;393(10185): 2073–84. Epub 2019/05/21. pmid: 31106753.
4. Aitsi-Selmi A, Murray V, Heymann D, McCloskey B, Azhar EI, Petersen E, et al. Reducing risks to health and wellbeing at mass gatherings: the role of the Sendai Framework for Disaster Risk Reduction. Int J Infect Dis. 2016;47: 101–4. Epub 2016/04/12. pmid: 27062983.
5. Tam JS, Barbeschi M, Shapovalova N, Briand S, Memish ZA, Kieny MP. Research agenda for mass gatherings: a call to action. Lancet Infect Dis. 2012;12(3): 231–9. Epub 2012/01/19. pmid: 22252148.
6. Arbon P. The development of conceptual models for mass-gathering health. Prehosp Disaster Med. 2004;19(3): 208–12. pmid: 15571196.
7. De Buck E, Pauwels NS, Dieltjens T, Vandekerckhove P. Use of evidence-based practice in an aid organisation: a proposal to deal with the variety in terminology and methodology. Int J Evid Based Healthc. 2014;12(1): 39–49. pmid: 24685899.
8. Moher D, Liberati A, Tetzlaff J, Altman DG, Group P. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7): e1000097. pmid: 19621072.
9. Moons KG, de Groot JA, Bouwmeester W, Vergouwe Y, Mallett S, Altman DG, et al. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: the CHARMS checklist. PLoS Med. 2014;11(10): e1001744. pmid: 25314315.
10. Moons KGM, Wolff RF, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: A Tool to Assess Risk of Bias and Applicability of Prediction Model Studies: Explanation and Elaboration. Ann Intern Med. 2019;170(1): W1–W33. Epub 2019/01/01. pmid: 30596876.
11. Evans JD. Straightforward statistics for the behavioral sciences. Thomson Brooks/Cole Publishing Co.; 1996.
12. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650): 924–6. pmid: 18436948.
13. Murad MH, Mustafa RA, Schunemann HJ, Sultan S, Santesso N. Rating the certainty in evidence in the absence of a single estimate of effect. Evid Based Med. 2017;22(3): 85–7. pmid: 28320705.
14. Guyatt GH, Oxman AD, Kunz R, Brozek J, Alonso-Coello P, Rind D, et al. GRADE guidelines 6. Rating the quality of evidence—imprecision. J Clin Epidemiol. 2011;64(12): 1283–93. Epub 2011/08/16. pmid: 21839614.
15. Arbon P, Bridgewater FH, Smith C. Mass gathering medicine: a predictive model for patient presentation and transport rates. Prehosp Disaster Med. 2001;16(3): 150–8. pmid: 11875799.
16. Grange JT, Green SM, Downs W. Concert medicine: spectrum of medical problems encountered at 405 major concerts. Acad Emerg Med. 1999;6(3): 202–7. pmid: 10192671.
17. Locoh-Donou S, Yan G, Berry T, O’Connor R, Sochor M, Charlton N, et al. Mass gathering medicine: event factors predicting patient presentation rates. Intern Emerg Med. 2016;11(5): 745–52. pmid: 26758062.
18. Milsten AM, Seaman KG, Liu P, Bissell RA, Maguire BJ. Variables influencing medical usage rates, injury patterns, and levels of care for mass gatherings. Prehosp Disaster Med. 2003;18(4): 334–46. pmid: 15310046.
19. Morimura N, Katsumi A, Koido Y, Sugimoto K, Fuse A, Asai Y, et al. Analysis of patient load data from the 2002 FIFA World Cup Korea/Japan. Prehosp Disaster Med. 2004;19(3): 278–84. pmid: 15571204.
20. Schwabe K, Schwellnus MP, Derman W, Swanevelder S, Jordaan E. Older females are at higher risk for medical complications during 21 km road race running: a prospective study in 39 511 race starters—SAFER study III. Br J Sports Med. 2014;48(11): 891–7. pmid: 24815927.
21. Selig B, Hastings M, Cannon C, Allin D, Klaus S, Diaz FJ. Effect of weather on medical patient volume at Kansas Speedway mass gatherings. J Emerg Nurs. 2013;39(4): e39–44. pmid: 22204886.
22. Tan CM, Tan IW, Kok WL, Lee MC, Lee VJ. Medical planning for mass-participation running events: a 3-year review of a half-marathon in Singapore. BMC Public Health. 2014;14: 1109. pmid: 25345356.
23. van Poppel D, de Koning J, Verhagen AP, Scholten-Peeters GG. Risk factors for lower extremity injuries among half marathon and marathon runners of the Lage Landen Marathon Eindhoven 2012: A prospective cohort study in the Netherlands. Scand J Med Sci Sports. 2016;26(2): 226–34. pmid: 25727692.
24. Westrol MS, Koneru S, McIntyre N, Caruso AT, Arshad FH, Merlin MA. Music Genre as a Predictor of Resource Utilization at Outdoor Music Concerts. Prehosp Disaster Med. 2017;32(3): 289–96. pmid: 28215192.
25. Woodall J, Watt K, Walker D, Tippett V, Enraght-Moony E, Bertolo C, et al. Planning volunteer responses to low-volume mass gatherings: do event characteristics predict patient workload? Prehosp Disaster Med. 2010;25(5): 442–8. pmid: 21053194.
26. Arbon P, Bottema M, Zeitz K, Lund A, Turris S, Anikeeva O, et al. Nonlinear Modelling for Predicting Patient Presentation Rates for Mass Gatherings. Prehosp Disaster Med. 2018;33(4): 362–7. pmid: 29962363.
27. Bowdish GE, Cordell WH, Bock HC, Vukov LF. Using regression analysis to predict emergency patient volume at the Indianapolis 500 mile race. Ann Emerg Med. 1992;21(10): 1200–3. pmid: 1416297.
28. FitzGibbon KM, Nable JV, Ayd B, Lawner BJ, Comer AC, Lichenstein R, et al. Mass-Gathering Medical Care in Electronic Dance Music Festivals. Prehosp Disaster Med. 2017;32(5): 563–7. pmid: 28625229.
29. Nable JV, Margolis AM, Lawner BJ, Hirshon JM, Perricone AJ, Galvagno SM, et al. Comparison of prediction models for use of medical resources at urban auto-racing events. Prehosp Disaster Med. 2014;29(6): 608–13. pmid: 25256003.
30. Zeitz KM, Zeitz CJ, Arbon P. Forecasting medical work at mass-gathering events: predictive model versus retrospective review. Prehosp Disaster Med. 2005;20(3): 164–8. pmid: 16018504.
31. Kman NE, Russell GB, Bozeman WP, Ehrman K, Winslow J. Derivation of a formula to predict patient volume based on temperature at college football games. Prehosp Emerg Care. 2007;11(4): 453–7. pmid: 17907032.
32. Hartman N, Williamson A, Sojka B, Alibertis K, Sidebottom M, Berry T, et al. Predicting resource use at mass gatherings using a simplified stratification scoring model. Am J Emerg Med. 2009;27(3): 337–43. pmid: 19328380.
33. Zeitz KM, Schneider DP, Jarrett D, Zeitz CJ. Mass gathering events: retrospective analysis of patient presentations over seven years. Prehosp Disaster Med. 2002;17(3): 147–50. pmid: 12627918.
34. Nieto PL, González-Alcaide G, Ramos JM. Mass gatherings: a systematic review of the literature on large events. Emergencias. 2017;29: 257–65. pmid: 28825282.
35. Moore R, Williamson K, Sochor M, Brady WJ. Large-event medicine—event characteristics impacting medical need. Am J Emerg Med. 2011;29(9): 1217–21. pmid: 20971598.
36. Baird MB, O’Connor RE, Williamson AL, Sojka B, Alibertis K, Brady WJ. The impact of warm weather on mass event medical need: a review of the literature. Am J Emerg Med. 2010;28(2): 224–9. pmid: 20159396.
37. Shiker MAK. Multivariate statistical analysis. British Journal of Science. 2012;6(1): 55–66.
38. Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature. 2019;567(7748): 305–7. pmid: 30894741.
39. Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond "p<0.05". Am Stat. 2019;73(1): 1–19.
40. Steenkamp M, Hutton AE, Ranse JC, Lund A, Turris SA, Bowles R, et al. Exploring International Views on Key Concepts for Mass-gathering Health through a Delphi Process. Prehosp Disaster Med. 2016;31(4): 443–53. pmid: 27212053.
41. World Health Organization (WHO). Public health for mass gatherings: key considerations. 2015. https://www.who.int/ihr/publications/WHO_HSE_GCR_2015.5/en/.