GOVERNMENT IT REPORT

Congress Backs Billions for Tech R&D Though Enactment Uncertain

The U.S. technology industry is guardedly supporting a massive legislative package designed to address a range of issues affecting the sector, including a federal commitment to add billions of dollars to government technology research and development programs.

The legislation, dubbed the U.S. Innovation and Competition Act (USICA), was approved June 8 by a rare bipartisan vote of 68-32 in the U.S. Senate.

A major factor driving the legislation is the contention that the U.S. has fallen behind China in a national effort to support technology development, including information technology and the digital economy. Sen. Todd Young, R-Ind., characterized the legislation as “a landmark bill to out-compete China in key emerging technology areas critical to our national security.”

Specific areas of focus in the bill include artificial intelligence, machine learning and other software advances; high-performance computing, semiconductors, and advanced computer hardware; quantum computing and information systems; biotechnology, medical technology, genomics, and synthetic biology; cybersecurity; and energy innovation, including battery technology.

Multiple Amendments

The package started out as the Endless Frontier Act, co-sponsored by Sen. Chuck Schumer, D-N.Y., and Sen. Young, among others. As first introduced in April, the bill was already ambitious, focusing on boosting funding for the National Science Foundation (NSF), including creation of a new NSF “Directorate for Technology.”

However, the bill attracted additional provisions during the legislative process, including some that were essentially complete stand-alone bills rolled into the final package, resulting in a 2,300-page proposal.

The legislation even includes the CHIPS for America program to provide $52 billion in federal support for domestic semiconductor development and production.

For companies involved in IT and the digital economy, an important part of the USICA bill deals with significantly boosting federal investments in technology through the National Science Foundation. Both the proposed funding levels and the government’s approach to managing those investments are critical issues requiring close attention for the IT sector.

Under the Senate USICA bill, NSF’s annual budget would nearly double to an average of $16 billion per year over five years from 2022 to 2026. The current fiscal 2021 budget is $8.5 billion. This huge boost in investment is largely related to funding a new NSF Directorate for Technology and Innovation at an average of nearly $6 billion annually from 2022 to 2026.

Private Sector Partnerships

Private sector IT and digital economy entities will be major beneficiaries of the new NSF directorate. The purpose of the directorate is to “strengthen U.S. leadership in critical technologies,” and to “accelerate technology commercialization.”

The legislation further provides that the proposed directorate should “direct basic and applied research, advanced technology development, and commercialization support in the key technology focus areas” listed in the bill. Through the directorate NSF is expected to form partnerships with other federal agencies as well as with “academia, the private sector, and nonprofit entities.”

The move to establishing closer ties between NSF and the private sector has raised concerns about the foundation’s traditional role of engaging in “pure” or basic research unfettered by commercial considerations.

Robert Atkinson, president of the Information Technology and Innovation Foundation (ITIF), said there was “pushback” soon after the NSF directorate was proposed. The scientific community, he noted, “resisted the idea that government would be asking them to do work related to a critical national mission, and to hold them accountable for ensuring that their work helped accomplish that mission.”

While ITIF supports the provisions in USICA which create the new NSF technology directorate, Atkinson told the E-Commerce Times that “an even more effective approach would be to establish such a directorate as a free-standing agency.”

A separate umbrella entity tuned in to the full range of federal technology activities would avoid any conflicts with the traditional missions of NSF and other agencies, while creating a national effort to support both government and commercial private sector technology development, he contends.

Atkinson favors the creation of a National Advanced Industry and Technology Agency, comparable in size to NSF, to “analyze U.S. industry strengths, weaknesses, opportunities, and threats, and to respond with well-resourced solutions ranging from support for domestic research and development to production partnerships and investment in advanced research facilities.”

More than 50 other countries have established such agencies, he noted.

“It is clear that NSF and the science community are uneasy” with taking on applied science with commercial connections versus NSF’s traditional mission, Atkinson said, adding that NSF would “vastly prefer” just getting much larger appropriations.

“But that would do little to help U.S. technology-based competitiveness,” he said. Establishing a separate agency would let NSF continue its mission while enabling applied and industry-focused research to be funded elsewhere, he observed.

Advocates, Opponents Take Positions

Whether the USICA package represents a comprehensive approach to developing a national technology capability through government intervention — or a confusing legislative hodgepodge — is likely to be in the eye of the beholder. Differences related to NSF’s future mission aren’t the only potential stumbling blocks affecting eventual enactment of the USICA legislation.

For example, the Computer and Communications Industry Association (CCIA) approved the major USICA goal of supporting increased federal investments for technology research and development, but found other parts of the bill “worrisome.”

One section of the bill deals with “Country of Origin Labeling” (COOL) requirements associated with the internet marketing of internationally sourced products. While COOL especially impacts the U.S. retail marketing sector, digital economy entities have concerns as well.

Arthur Sidney, vice president of CCIA, noted that the country of origin provisions in the bill present implementation challenges “given the volume of transactions and no consistent, uniform, and administrable definition” related to COO coverage. “Country of Origin in the international trade context is difficult to administer by customs and authorities, let alone a digital service,” he told the E-Commerce Times.

Sidney also expressed concern about the section of USICA aimed at curbing the use of censorship as a trade barrier tool. Language that would refer such activities to legal authorities for action was “scaled back” in the Senate bill, he contended. It was replaced by provisions that simply call for an annual report to Congress with a list of countries that use censorship as a barrier to digital trade and a description of the agency’s efforts to address digital trade disruptions, he said.

While the U.S. Chamber of Commerce expressed general support for the bill in a June 9 statement, Neil Bradley, executive vice president and chief policy officer, said the Chamber had “ongoing concerns” about the bill. In a letter to the Senate in May, the Chamber advocated elimination of the Country of Origin section and expressed reservations about provisions that impact e-commerce such as “Cyber Shield, copyright, and information in the public domain.”

The Senate bill must now be considered by the House of Representatives, where similar legislation was approved Monday. However, the House bill focused only on the research scope of NSF and the U.S. Department of Energy. It also includes a new NSF technology solutions directorate, but funds it at a much lower level than the Senate version.

The ultimate outcome for USICA could take several paths. Since amendments to the Senate bill were added with relative ease, they could be scuttled just as easily, allowing the core NSF and national technology investment elements to be the focus for legislators. Or the collective controversies associated with the different versions of the legislation could stymie adoption.

Regarding chances for enactment of USICA, CCIA’s Sidney noted “We aren’t sanguine, but we are hopeful that this will see the light of day. While it’s not perfect, and we have some concerns, we are hopeful that it can help businesses, and serve as one of the building blocks to protect U.S. innovation and technology.”

John K. Higgins has been an ECT News Network reporter since 2009. His main areas of focus are U.S. government technology issues such as IT contracting, cybersecurity, privacy, cloud technology, big data and e-commerce regulation. As a freelance journalist and career business writer, he has written for numerous publications, including The Corps Report and Business Week. Email John.

The Rise of Digital Ad Taxes Could Impact Online Marketplaces

For years, affiliate marketers, social media companies, online marketplace platforms, and search engines alike have enjoyed a largely tax-free landscape for their digital activities, afforded to them by the United States’ Internet Tax Freedom Act of 1998. However, that could all be changing soon.

On the horizon, taxpayers should prepare themselves for the next evolution in state taxation: digital advertising taxes. As policymakers and tax practitioners eagerly watch Maryland spearhead the first-in-the-nation digital advertising tax (DAT), legal concerns have been raised about the validity of the state’s recently enacted tax.

Structured as a gross receipts tax on in-state digital advertising revenues, Maryland’s DAT takes aim at large technology companies that have benefited from years of digital advertising as a catalyst for generating enormous amounts of wealth.

Maryland’s digital ad tax applies a graduated rate that escalates based on the taxpayer’s global annual revenues. The tax brackets are as follows:

  • 2.5 percent of the assessable base for persons with global annual gross revenues of US$100 million through $1 billion
  • 5 percent of the assessable base for persons with global annual gross revenues of more than $1 billion through $5 billion
  • 7.5 percent of the assessable base for persons with global annual gross revenues of more than $5 billion through $15 billion
  • 10 percent of the assessable base for persons with global annual gross revenues exceeding $15 billion

Currently, Maryland’s DAT applies to taxpayers with at least $1 million of annual gross revenues derived from digital advertising services within Maryland and taxpayers with global annual gross revenues of $100 million or more.
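
Read together, the thresholds and graduated brackets above imply a calculation along the following lines. This Python sketch is illustrative only and simplifies the statute: it assumes the bracket is selected by global annual gross revenues and that the single corresponding rate applies to the entire Maryland assessable base.

    def maryland_dat(global_revenue: float, md_ad_revenue: float) -> float:
        """Estimate Maryland's digital ad tax under a simplified reading of the
        thresholds and brackets described above (illustrative only)."""
        # Applicability thresholds: at least $1M of Maryland digital ad revenue
        # and at least $100M of global annual gross revenues.
        if md_ad_revenue < 1_000_000 or global_revenue < 100_000_000:
            return 0.0

        # A single rate, selected by the global-revenue bracket, is applied to
        # the Maryland assessable base (in-state digital ad gross revenues).
        if global_revenue <= 1_000_000_000:
            rate = 0.025
        elif global_revenue <= 5_000_000_000:
            rate = 0.05
        elif global_revenue <= 15_000_000_000:
            rate = 0.075
        else:
            rate = 0.10
        return md_ad_revenue * rate


    # Example: $2B in global revenue and $10M of Maryland digital ad revenue
    # falls in the 5 percent bracket, for an estimated tax of $500,000.
    print(maryland_dat(2_000_000_000, 10_000_000))  # 500000.0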

Taxpayers subject to the tax are expected to file an annual declaration of estimated tax and make quarterly estimated tax payments. Maryland’s first declaration of estimated tax is due April 15, 2022. In addition, taxpayers must maintain books and records of their digital advertising services provided in Maryland to validate the basis for their apportionment and, ultimately, the taxpayer’s calculated digital ad tax.

The Maryland Comptroller has issued proposed regulations to provide clarity on the calculation. The Comptroller proposes to calculate the numerator of the apportionment factor by determining whether the device showing the advertising is in Maryland. The denominator is the number of devices that have accessed the digital advertising services from any location. This fixes one of the issues with the statute, in which the denominator counted only devices in the U.S. while the revenues were worldwide.
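
A minimal sketch of how that device-based factor might be applied, assuming the factor simply scales worldwide digital advertising revenue; the function name and inputs are hypothetical, not drawn from the proposed regulations:

    def md_apportioned_base(worldwide_ad_revenue: float,
                            md_devices: int, all_devices: int) -> float:
        """Apportion worldwide digital ad revenue to Maryland with the
        device-based factor: devices in Maryland over devices everywhere
        that accessed the services."""
        if all_devices == 0:
            return 0.0
        return worldwide_ad_revenue * (md_devices / all_devices)


    # Example: $500M of worldwide digital ad revenue, with 2M of the 100M
    # devices that accessed the services located in Maryland -> $10M base.
    print(md_apportioned_base(500_000_000, 2_000_000, 100_000_000))  # 10000000.0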

Constitutional Challenges

Expanding on the legality of Maryland’s digital ad tax, the law presents unique constitutional challenges at the federal level that will undoubtedly make for an uphill legal battle for the state. Maryland’s DAT law creates a legal inequity in that it unfairly targets online advertisers while not applying the same rules to other forms of advertising in the state, such as radio, television, and print.

The Internet Tax Freedom Act was created over twenty years ago to prevent this type of digital discrimination. However, similar to the surprising outcome for many tax practitioners in the Wayfair case, it’s entirely possible the federal law will evolve to service the ever-changing e-commerce landscape.

The legal battles include the complaint filed in federal district court by the U.S. Chamber of Commerce and various trade groups. Their complaint states that the new law violates the dormant Commerce Clause, the Fourteenth Amendment Due Process Clause, and the Internet Tax Freedom Act. They argue that the tax is discriminatory in that it favors in-state companies, and it punishes out-of-state activities as the tax base specifically includes gross receipts from outside the state of Maryland.

In addition, Comcast and Verizon have filed a separate complaint in state court. Their complaint challenges the tax on grounds similar to the federal district court case and on additional grounds that it violates the Supremacy Clause and the Declaration of Rights in the Maryland Constitution.

New York, Connecticut, Indiana, Montana, Nebraska, Oregon, and Washington have all drafted or proposed similar legislation for gross receipts consumption-based taxes on digital advertising services. In 2021 alone, twelve DAT or similar data-tax bills were introduced in various states.

However, many of these bills have not been enacted because state legislators are waiting to see how Maryland’s digital advertising tax will be implemented amidst its administrative, economic, and legal challenges.

Is California Next?

Maryland’s new law has put many California tech companies on notice, and the question is: “Will California enact its own DAT?” Admittedly, it’s too early to make any reasonable predictions. While it’s possible California could enact a DAT, or something similar, it’s unlikely to happen anytime soon.

First, the Internet Tax Freedom Act would need to be challenged by state lawmakers, adjudicated by the Supreme Court, and changed. This is no easy feat. Next, California would need to pass its own law either through California legislative and executive branches, or potentially through a state proposition.

Given that California is already seen as an unfriendly business state compared to Texas, Tennessee, and Florida, a California DAT could create more incentives for companies to leave the state or cease to do business in California altogether.

Additionally, tech is a prominent and influential business sector in California. The industry contributes to the state’s corporate income tax revenue, and it creates jobs, leading to an echo revenue stream generated by individual California resident taxpayers.

From a state sourcing perspective, determining where to source digital ad revenues can be problematic, especially when an ad’s reach, impression location, and impact are unknown to the advertiser.

Under California’s regulations, Section 25136-2 provides cascading rules on how to source services and intangibles, including digital ad revenue. In situations where the benefit of the service or intangible is indeterminable, California allows taxpayers to use a reasonable approximation approach, whereby sales are apportioned among jurisdictions based on a common variable, such as census population, ad impressions, unique user IDs, customer counts, or sales metrics.
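
As a rough illustration of that reasonable approximation approach, the sketch below allocates ad revenue across states in proportion to one such common variable (ad impressions); the variable chosen, the function, and the figures are hypothetical, not taken from the regulation.

    def source_by_impressions(ad_revenue: float,
                              impressions_by_state: dict[str, int]) -> dict[str, float]:
        """Allocate ad revenue across states in proportion to a common
        variable (here, ad impressions) as one 'reasonable approximation'."""
        total = sum(impressions_by_state.values())
        if total == 0:
            return {state: 0.0 for state in impressions_by_state}
        return {state: ad_revenue * count / total
                for state, count in impressions_by_state.items()}


    # Example: $1M of ad revenue split by impressions across three states.
    print(source_by_impressions(1_000_000,
                                {"CA": 600_000, "TX": 300_000, "NY": 100_000}))
    # {'CA': 600000.0, 'TX': 300000.0, 'NY': 100000.0}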

Furthermore, California’s sourcing regulations are soon changing. Proposed amendments to the rules for sourcing sales other than tangible personal property are slated to take effect starting in 2023.

What does the future hold for online advertisers? At this point, it’s unclear. Many of the DAT and sales-of-personal-data laws currently proposed are targeting Big Tech, but there will certainly be a ripple effect among small businesses that use their services. Online marketplaces will need to adapt and, more importantly, stay educated on this constantly evolving issue.

Brandon Gillum is a State and Local Tax Manager with accounting and advisory firm BPM. Email Brandon.

Think Tank Calls for Policymakers To Grow AI, Not Choke It

Policymakers should be fostering the use of artificial intelligence in making workforce decisions, not inhibiting it, according to the Center for Data Innovation.

In a report released Monday, the global think tank called on governments to encourage AI adoption and establish guardrails to limit harms.

“The dominant narrative around AI is one of fear, so policymakers need to actively support the technology’s growth,” the report’s author, policy analyst Hodan Omaar, said in a statement. “It is critical for lawmakers to avoid intervening in ways that are ineffective, counterproductive, or harmful to innovation.”

The report explained that AI-enabled tools can support workforce decisions by helping businesses manage their existing employees, as well as recruit and hire new ones.

They can also boost productivity among employers, such as by reducing the time needed to hire new employees, increasing retention rates, and improving communications and team dynamics among workers.

In addition, the report continued, these tools may help employers reduce human biases when hiring, decide on compensation, and make other employment-related decisions.

AI Concerns To Address

The report maintained that to successfully deploy AI for workforce decisions, employers will need to address potential concerns.

Some of those concerns include ensuring that the increased use of AI does not exacerbate existing biases and inequalities; that the metrics AI tools produce are fair and accurate; that increased monitoring of employees is not unduly invasive; and that the processing of biometric data does not reveal sensitive personal information about employees that they may wish to keep private, such as data about their emotions, health, or disabilities.

To address these concerns, the report continued, several policymakers and advocacy groups have called for new public policies that apply the “precautionary principle” to AI, which says that government should limit the use of new technology until it is proven safe.

“In short, they favor restricting the use of AI because they believe it is better to be safe than sorry,” Omaar wrote. “But these policies do more harm than good because they make it more expensive to develop AI, limit the testing and use of AI, and even ban some applications, thereby reducing productivity, competitiveness, and innovation.”

“Instead,” she noted, “policymakers should pave the way for widespread adoption of AI in the workplace while building guardrails, where necessary, to limit harms.”

Employer and Employee Benefits

Artificial intelligence can benefit both employers and employees, added Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, D.C.

“Ideally AI can help businesses make both more efficient decisions — by synthesizing and analyzing much more granular data than human managers are able to process — and also more fair decisions, by providing a uniform mechanism for evaluating employees that can help filter out the biases of individual managers,” he told TechNewsWorld.

“Plenty of real-world applications of workplace AI are beneficial for employees as well — finding ways to reduce on-the-job injuries or burnout, not just ramp up productivity,” he added.

AI systems can become a problem when people become too dependent on them, noted Craig Le Clair, a vice president and principal analyst at Forrester Research.

“The system becomes a black box to humans,” he told TechNewsWorld. “They can’t explain how a decision was made so they don’t know if it was biased or not.”

Algorithm Bias

Sanchez explained that algorithms can have biases in a number of ways. They can replicate biases in the data they’re trained on. They can also be insensitive to circumstances humans would be aware of.

“When that’s the case, the bias gets scaled across the entire firm or even a whole sector, if a particular tool is widespread — though when biases are identified, they’re usually easier to correct than their human counterparts,” he said.

“The ability to process granular data can also be a double-edged sword, because it enables a level of minute monitoring that can feel dehumanizing,” Sanchez continued.

“It can feel like important decisions about your career depend on an opaque algorithm that may not be intelligible to the employee in the way we expect supervisors’ decisions to be,” he explained.

John Carey, managing director in the technology practice at AArete, a global management consulting firm, added that AI can’t easily match human experience or instinct in dealing with employees and ensuring that they are treated with empathy.

“We, as humans, can detect far more about behavioral issues from a conversation rather than relying on just data,” he told TechNewsWorld. “So it’s important that AI is used as a support tool rather than be relied on exclusively.”

Data Quality Important

Jim McGregor, founder and principal analyst of Tirias Research, a high-tech research and advisory firm in Phoenix, Ariz., explained that how an AI tool performs depends on the quality of the data it’s given and the bias of that information.

“A lot of the information going into AI systems will be coming from employees,” he told TechNewsWorld. “Everyone, no matter who you are, has biases. It’s hard to break those biases.”

“AI is a tool,” he said. “It should not be the only tool that any employer uses for hiring, firing or advancing people.”

“AI has the potential to improve workforce decisions,” he added, “but you have to be conscious of its upside and downside when using it as a tool.”

Advice for Policymakers

In her report, Omaar proposed eight principles to guide policymakers when making decisions about AI:

  • Make government an early adopter of AI for workforce decisions and share best practices;
  • Ensure data protection laws support the adoption of AI for workforce decisions;
  • Ensure employment nondiscrimination laws apply regardless of whether an organization uses AI;
  • Create rules to safeguard against new privacy risks in workforce data;
  • Address concerns about AI systems for workforce decisions at the national level;
  • Enable the global free flow of employee data;
  • Do not regulate the input of AI systems used for workforce decisions; and
  • Focus regulation on employers, not AI vendors.

Light Touch

Sanchez endorsed the light government touch advocated in Omaar’s recommendations.

“I’m inclined to agree with the CDI report that we probably don’t need AI-specific rules in most cases, though it may take some time to figure out how to apply existing rules to decisions made with AI assistance,” he said.

“If there are things we want to require or forbid employers to do, then at some level it shouldn’t matter whether they do those things with microprocessors or human brains — trying to directly regulate software design is usually a mistake,” he observed.

“Anyone who thinks they can regulate AI is foolish,” added McGregor.

“If you start slapping regulations on it, you’re going to make it ineffective and limit innovation,” he said. “You’re going to have more downside than upside.”

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
